Test Report: KVM_Linux_containerd 11504

Commit: 773500bc74bd75ddc5ffa547d8fa571191ff1ba1

Failed tests (15/260)

TestAddons/parallel/Registry (72.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: registry stabilized in 24.319893ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:335: "registry-w8bc5" [fa3135ca-eb52-4bc8-a566-cbbd4f91a976] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018093771s

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:335: "registry-proxy-pl29f" [f2b7e3b5-8d72-4582-a329-e0216381f838] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00938417s
addons_test.go:307: (dbg) Run:  kubectl --context addons-20210526204012-510955 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:312: (dbg) Run:  kubectl --context addons-20210526204012-510955 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:312: (dbg) Non-zero exit: kubectl --context addons-20210526204012-510955 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m1.121966759s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:314: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20210526204012-510955 run --rm registry-test --restart=Never --image=busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:318: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210526204012-510955 ip
2021/05/26 20:55:31 [DEBUG] GET http://192.168.39.182:5000
addons_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210526204012-510955 addons disable registry --alsologtostderr -v=1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (262.009838ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0526 20:55:32.006758  512037 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 20:55:32.006783  512037 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "addons-20210526204012-510955" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestAddons/parallel/Registry (72.17s)
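
Note on the post-mortem above: the status probe sh -c "df -h /var | awk 'NR==2{print $5}'" exits 127 because /bin/awk is missing from the guest, so minikube reports the host state as "Error" and log retrieval is skipped; the same error recurs in every post-mortem in this report. A minimal manual check, assuming the profile's VM were still reachable over minikube ssh (a follow-up sketch, not part of the recorded run):

    out/minikube-linux-amd64 -p addons-20210526204012-510955 ssh -- 'command -v awk || echo "awk not found"'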

TestAddons/parallel/Ingress (242.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:335: "ingress-nginx-admission-create-tdf49" [bcd2bef4-cad0-4dc5-a2ef-237a18b27356] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 9.608465ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210526204012-510955 replace --force -f testdata/nginx-ingv1beta.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:170: kubectl --context addons-20210526204012-510955 replace --force -f testdata/nginx-ingv1beta.yaml: unexpected stderr: Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210526204012-510955 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:180: (dbg) Done: kubectl --context addons-20210526204012-510955 replace --force -f testdata/nginx-pod-svc.yaml: (1.346804587s)
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:335: "nginx" [b337afaf-1bdf-4b1c-8d8f-f51d0e000c07] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 4m0s: timed out waiting for the condition ****
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (233.364993ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0526 20:58:22.035652  512120 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 20:58:22.035668  512120 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

** /stderr **
addons_test.go:185: status error: exit status 3 (may be ok)
addons_test.go:185: "addons-20210526204012-510955" apiserver is not running, skipping kubectl commands (state="Nonexistent")
addons_test.go:186: failed waiting for ngnix pod: run=nginx within 4m0s: timed out waiting for the condition
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (214.922993ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0526 20:58:22.250644  512151 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 20:58:22.250668  512151 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "addons-20210526204012-510955" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestAddons/parallel/Ingress (242.42s)
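
The nginx pod stayed in Pending / ContainersNotReady for the full 4m0s, and by post-mortem time the apiserver state was "Nonexistent", so nothing more could be inspected in this run. On a re-run where the cluster stays up, the pod's events are the usual starting point; a sketch using standard kubectl commands (not part of the recorded run):

    kubectl --context addons-20210526204012-510955 describe pod nginx -n default
    kubectl --context addons-20210526204012-510955 get events -n default --sort-by=.lastTimestamp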

TestAddons/parallel/HelmTiller (135.04s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: tiller-deploy stabilized in 4.043259ms
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:335: "tiller-deploy-7c86b7fbdf-d4fv8" [4abc0fe8-5f56-461c-8bb8-1d182fb363d9] Running
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010410021s
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Non-zero exit: kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exit status 1 (1m1.120241971s)

-- stdout --
	pod "helm-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:440: (dbg) Non-zero exit: kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exit status 1 (1.126370588s)

** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists

** /stderr **
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:440: (dbg) Non-zero exit: kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exit status 1 (1.124996352s)

** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists

** /stderr **
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:440: (dbg) Non-zero exit: kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exit status 1 (1.124125971s)

** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists

** /stderr **
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Non-zero exit: kubectl --context addons-20210526204012-510955 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: exit status 1 (1m1.150753885s)

-- stdout --
	pod "helm-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:454: failed checking helm tiller: exit status 1
addons_test.go:457: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210526204012-510955 addons disable helm-tiller --alsologtostderr -v=1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (234.965151ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0526 21:00:37.289304  512288 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 21:00:37.289324  512288 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "addons-20210526204012-510955" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestAddons/parallel/HelmTiller (135.04s)
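
The helm-test pod is created with --rm, so it is deleted when the run times out and leaves nothing to inspect. A possible variant for a manual re-run that keeps the pod around for describe/logs (the name helm-test-debug is arbitrary; the image and service account mirror the test's own invocation):

    kubectl --context addons-20210526204012-510955 -n kube-system run helm-test-debug --restart=Never --image=alpine/helm:2.16.3 --serviceaccount=tiller -- version
    kubectl --context addons-20210526204012-510955 -n kube-system describe pod helm-test-debug
    kubectl --context addons-20210526204012-510955 -n kube-system logs helm-test-debug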

TestAddons/parallel/Olm (669.07s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: catalog-operator stabilized in 24.065166ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:480: olm-operator stabilized in 28.01678ms
addons_test.go:484: packageserver stabilized in 31.350623ms
addons_test.go:486: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "catalog-operator-7544db6ccd-fqq8w" [28420aff-6fde-4477-a2bd-c8d7b0de9607] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:486: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.012673311s

=== CONT  TestAddons/parallel/Olm
addons_test.go:489: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "olm-operator-79b67c565d-h5r5t" [3c5a139e-e845-4393-8bd2-1c1715c72769] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:489: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.009954873s
addons_test.go:492: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:335: "packageserver-f7569bd78-4zkdc" [b6a8afef-d715-4e5e-b865-dc9f9452346b] Running
helpers_test.go:335: "packageserver-f7569bd78-krfql" [5ba641f8-9f94-40e5-b01d-248370b674ca] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "packageserver-f7569bd78-4zkdc" [b6a8afef-d715-4e5e-b865-dc9f9452346b] Running
helpers_test.go:335: "packageserver-f7569bd78-krfql" [5ba641f8-9f94-40e5-b01d-248370b674ca] Running
helpers_test.go:335: "packageserver-f7569bd78-4zkdc" [b6a8afef-d715-4e5e-b865-dc9f9452346b] Running
helpers_test.go:335: "packageserver-f7569bd78-krfql" [5ba641f8-9f94-40e5-b01d-248370b674ca] Running
helpers_test.go:335: "packageserver-f7569bd78-4zkdc" [b6a8afef-d715-4e5e-b865-dc9f9452346b] Running
helpers_test.go:335: "packageserver-f7569bd78-krfql" [5ba641f8-9f94-40e5-b01d-248370b674ca] Running
helpers_test.go:335: "packageserver-f7569bd78-4zkdc" [b6a8afef-d715-4e5e-b865-dc9f9452346b] Running
helpers_test.go:335: "packageserver-f7569bd78-krfql" [5ba641f8-9f94-40e5-b01d-248370b674ca] Running
helpers_test.go:335: "packageserver-f7569bd78-4zkdc" [b6a8afef-d715-4e5e-b865-dc9f9452346b] Running
addons_test.go:492: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.009038402s
addons_test.go:495: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:335: "operatorhubio-catalog-6bg9l" [b0e355fa-da0d-4e27-a2f4-9c64ce19df57] Running
addons_test.go:495: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.007338538s
addons_test.go:500: (dbg) Run:  kubectl --context addons-20210526204012-510955 create -f testdata/etcd.yaml
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210526204012-510955 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210526204012-510955 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210526204012-510955 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210526204012-510955 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210526204012-510955 get csv -n my-etcd
addons_test.go:522: failed checking operator installed: kubectl --context addons-20210526204012-510955 get csv -n my-etcd stdout = "NAME                  DISPLAY   VERSION   REPLACES              PHASE\netcdoperator.v0.9.4   etcd      0.9.4     etcdoperator.v0.9.2   Failed\n", want "Succeeded"
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (308.209902ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0526 21:05:28.907372  514898 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 21:05:28.907399  514898 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "addons-20210526204012-510955" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestAddons/parallel/Olm (669.07s)
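
The etcdoperator.v0.9.4 CSV ended in phase "Failed" rather than "Succeeded". When the cluster is still reachable (it no longer was at post-mortem time), the CSV's status message usually explains why; a sketch with standard kubectl commands, where "csv" is OLM's short name for clusterserviceversions:

    kubectl --context addons-20210526204012-510955 -n my-etcd get csv etcdoperator.v0.9.4 -o jsonpath='{.status.phase}{"\n"}{.status.message}{"\n"}'
    kubectl --context addons-20210526204012-510955 -n my-etcd describe csv etcdoperator.v0.9.4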

TestAddons/parallel/CSI (720.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: failed waiting for csi-hostpath-driver pods to stabilize: timed out waiting for the condition
addons_test.go:539: csi-hostpath-driver pods stabilized in 6m0.017466542s
addons_test.go:542: (dbg) Run:  kubectl --context addons-20210526204012-510955 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210526204012-510955 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:548: failed waiting for PVC hpvc: timed out waiting for the condition
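For reference, the condition being polled above can be reproduced by hand. The following is a hedged Go sketch, not the test's actual helper: it shells out to the same kubectl invocation and waits for the PVC phase. The context name, PVC name, and namespace are taken from the log; the target phase ("Bound"), the 6-minute budget, and the 2-second interval are assumptions made only for illustration.

	// pvcwait.go: illustrative poll mirroring the kubectl command logged above (assumptions noted inline).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed wait budget
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl",
				"--context", "addons-20210526204012-510955",
				"get", "pvc", "hpvc",
				"-o", "jsonpath={.status.phase}",
				"-n", "default").Output()
			if strings.TrimSpace(string(out)) == "Bound" { // assumed target phase
				fmt.Println("PVC hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		fmt.Println("timed out waiting for PVC hpvc")
	}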
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (321.826388ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 21:07:32.861936  516212 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 21:07:32.861962  516212 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "addons-20210526204012-510955" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestAddons/parallel/CSI (720.86s)
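A note on the post-mortem above: the status check runs sh -c "df -h /var | awk 'NR==2{print $5}'" (see status.go:374 in the stderr), and that shell reports /bin/awk as missing, so the pipeline exits with status 127 and status reporting degrades to "Error"/"Nonexistent". Purely as a hedged illustration of what that one-liner computes, and not a minikube change, the same value can be read without awk; the Go sketch below assumes df -h /var prints the /var row on its second line with the Use% value in the fifth whitespace-separated column.

	// dfusage.go: illustrative equivalent of: df -h /var | awk 'NR==2{print $5}'
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("df", "-h", "/var").Output()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			fmt.Println("unexpected df output")
			return
		}
		fields := strings.Fields(lines[1]) // second line: the /var filesystem row
		if len(fields) >= 5 {
			fmt.Println(fields[4]) // fifth column is Use%
		}
	}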

                                                
                                    
TestAddons/parallel/GCPAuth (481.72s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:631: (dbg) Run:  kubectl --context addons-20210526204012-510955 create -f testdata/busybox.yaml
addons_test.go:631: (dbg) Done: kubectl --context addons-20210526204012-510955 create -f testdata/busybox.yaml: (1.246745042s)
addons_test.go:637: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [207310fb-c7b2-4674-aff8-05e10b8e7021] Pending
helpers_test.go:335: "busybox" [207310fb-c7b2-4674-aff8-05e10b8e7021] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:637: ***** TestAddons/parallel/GCPAuth: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
addons_test.go:637: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (226.845554ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 21:02:27.012685  512922 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 21:02:27.012706  512922 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

                                                
                                                
** /stderr **
addons_test.go:637: status error: exit status 3 (may be ok)
addons_test.go:637: "addons-20210526204012-510955" apiserver is not running, skipping kubectl commands (state="Nonexistent")
addons_test.go:639: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210526204012-510955 -n addons-20210526204012-510955: exit status 3 (228.099909ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 21:02:27.241373  512953 status.go:374] failed to get storage capacity of /var: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory
	E0526 21:02:27.241389  512953 status.go:247] status error: sh -c "df -h /var | awk 'NR==2{print $5}'": Process exited with status 127
	stdout:
	
	stderr:
	sh: /bin/awk: No such file or directory

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "addons-20210526204012-510955" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestAddons/parallel/GCPAuth (481.72s)

                                                
                                    
TestForceSystemdFlag (126.97s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210526215127-510955 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p force-systemd-flag-20210526215127-510955 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: exit status 90 (2m5.23156212s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-20210526215127-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	* Using the kvm2 driver based on user configuration
	* Starting control plane node force-systemd-flag-20210526215127-510955 in cluster force-systemd-flag-20210526215127-510955
	* Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0526 21:51:27.194779  557773 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:51:27.194858  557773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:51:27.194864  557773 out.go:304] Setting ErrFile to fd 2...
	I0526 21:51:27.194869  557773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:51:27.195006  557773 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:51:27.195364  557773 out.go:298] Setting JSON to false
	I0526 21:51:27.245331  557773 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":20050,"bootTime":1622045838,"procs":184,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:51:27.245422  557773 start.go:120] virtualization: kvm guest
	I0526 21:51:27.248368  557773 out.go:170] * [force-systemd-flag-20210526215127-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:51:27.250282  557773 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:51:27.251967  557773 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:51:27.253829  557773 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:51:27.255831  557773 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:51:27.256399  557773 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:51:27.292358  557773 out.go:170] * Using the kvm2 driver based on user configuration
	I0526 21:51:27.292389  557773 start.go:278] selected driver: kvm2
	I0526 21:51:27.292397  557773 start.go:751] validating driver "kvm2" against <nil>
	I0526 21:51:27.292416  557773 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:51:27.293618  557773 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:51:27.293811  557773 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 21:51:27.305454  557773 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 21:51:27.305556  557773 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0526 21:51:27.305713  557773 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
	I0526 21:51:27.305746  557773 cni.go:93] Creating CNI manager for ""
	I0526 21:51:27.305760  557773 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0526 21:51:27.305769  557773 start_flags.go:268] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0526 21:51:27.305794  557773 start_flags.go:273] config:
	{Name:force-systemd-flag-20210526215127-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-flag-20210526215127-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:51:27.305900  557773 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:51:27.308367  557773 out.go:170] * Starting control plane node force-systemd-flag-20210526215127-510955 in cluster force-systemd-flag-20210526215127-510955
	I0526 21:51:27.308395  557773 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:51:27.308424  557773 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:51:27.308444  557773 cache.go:54] Caching tarball of preloaded images
	I0526 21:51:27.308570  557773 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 21:51:27.308590  557773 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
	I0526 21:51:27.308708  557773 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/force-systemd-flag-20210526215127-510955/config.json ...
	I0526 21:51:27.308747  557773 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/force-systemd-flag-20210526215127-510955/config.json: {Name:mkc96156ba373cedf721646bc8e5a624ebce23d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:51:27.308946  557773 cache.go:191] Successfully downloaded all kic artifacts
	I0526 21:51:27.308979  557773 start.go:313] acquiring machines lock for force-systemd-flag-20210526215127-510955: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:51:27.309033  557773 start.go:317] acquired machines lock for "force-systemd-flag-20210526215127-510955" in 37.475µs
	I0526 21:51:27.309057  557773 start.go:89] Provisioning new machine with config: &{Name:force-systemd-flag-20210526215127-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-flag-20210526215127-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0526 21:51:27.309140  557773 start.go:126] createHost starting for "" (driver="kvm2")
	I0526 21:51:27.311343  557773 out.go:197] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0526 21:51:27.311482  557773 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:51:27.311531  557773 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:51:27.321710  557773 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0526 21:51:27.322205  557773 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:51:27.322830  557773 main.go:128] libmachine: Using API Version  1
	I0526 21:51:27.322856  557773 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:51:27.323272  557773 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:51:27.323459  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetMachineName
	I0526 21:51:27.323591  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:27.323772  557773 start.go:160] libmachine.API.Create for "force-systemd-flag-20210526215127-510955" (driver="kvm2")
	I0526 21:51:27.323799  557773 client.go:168] LocalClient.Create starting
	I0526 21:51:27.323836  557773 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem
	I0526 21:51:27.323899  557773 main.go:128] libmachine: Decoding PEM data...
	I0526 21:51:27.323926  557773 main.go:128] libmachine: Parsing certificate...
	I0526 21:51:27.324053  557773 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem
	I0526 21:51:27.324081  557773 main.go:128] libmachine: Decoding PEM data...
	I0526 21:51:27.324098  557773 main.go:128] libmachine: Parsing certificate...
	I0526 21:51:27.324161  557773 main.go:128] libmachine: Running pre-create checks...
	I0526 21:51:27.324178  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .PreCreateCheck
	I0526 21:51:27.324457  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetConfigRaw
	I0526 21:51:27.324836  557773 main.go:128] libmachine: Creating machine...
	I0526 21:51:27.324856  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .Create
	I0526 21:51:27.325007  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Creating KVM machine...
	I0526 21:51:27.328193  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found existing default KVM network
	I0526 21:51:27.329892  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:27.329724  557796 network.go:263] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0001865d8] misses:0}
	I0526 21:51:27.329927  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:27.329834  557796 network.go:210] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0526 21:51:27.364066  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | trying to create private KVM network mk-force-systemd-flag-20210526215127-510955 192.168.39.0/24...
	I0526 21:51:27.609312  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | private KVM network mk-force-systemd-flag-20210526215127-510955 192.168.39.0/24 created
	I0526 21:51:27.609354  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955 ...
	I0526 21:51:27.609374  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:27.609270  557796 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:51:27.609405  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso
	I0526 21:51:27.609427  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso...
	I0526 21:51:27.813506  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:27.813392  557796 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/id_rsa...
	I0526 21:51:27.869880  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:27.869777  557796 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/force-systemd-flag-20210526215127-510955.rawdisk...
	I0526 21:51:27.869912  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Writing magic tar header
	I0526 21:51:27.869932  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Writing SSH key tar header
	I0526 21:51:27.870016  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:27.869930  557796 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955 ...
	I0526 21:51:27.870075  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955
	I0526 21:51:27.870124  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955 (perms=drwx------)
	I0526 21:51:27.870140  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines
	I0526 21:51:27.870153  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines (perms=drwxr-xr-x)
	I0526 21:51:27.870170  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube (perms=drwxr-xr-x)
	I0526 21:51:27.870190  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1 (perms=drwxr-xr-x)
	I0526 21:51:27.870216  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:51:27.870235  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0526 21:51:27.870248  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0526 21:51:27.870254  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Creating domain...
	I0526 21:51:27.870268  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1
	I0526 21:51:27.870275  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0526 21:51:27.870300  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Checking permissions on dir: /home/jenkins
	I0526 21:51:27.870342  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Checking permissions on dir: /home
	I0526 21:51:27.870364  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Skipping /home - not owner
	I0526 21:51:27.897826  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:fb:8f:76 in network default
	I0526 21:51:27.898527  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Ensuring networks are active...
	I0526 21:51:27.898557  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:27.900604  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Ensuring network default is active
	I0526 21:51:27.900933  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Ensuring network mk-force-systemd-flag-20210526215127-510955 is active
	I0526 21:51:27.901513  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Getting domain xml...
	I0526 21:51:27.903530  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Creating domain...
	I0526 21:51:28.334037  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Waiting to get IP...
	I0526 21:51:28.335044  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:28.335526  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:28.335562  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:28.335468  557796 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0526 21:51:28.599795  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:28.600274  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:28.600309  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:28.600213  557796 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0526 21:51:28.982728  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:28.983195  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:28.983229  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:28.983154  557796 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0526 21:51:29.407859  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:29.408442  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:29.408484  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:29.408376  557796 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0526 21:51:29.882905  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:29.883361  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:29.883394  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:29.883308  557796 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0526 21:51:30.472182  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:30.472711  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:30.472746  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:30.472667  557796 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0526 21:51:31.308518  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:31.308981  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:31.309007  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:31.308937  557796 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0526 21:51:32.057328  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:32.057736  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:32.057768  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:32.057696  557796 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0526 21:51:33.046136  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:33.046556  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:33.046590  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:33.046495  557796 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0526 21:51:34.237944  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:34.238278  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:34.238343  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:34.238259  557796 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0526 21:51:35.917862  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:35.918355  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:35.918392  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:35.918295  557796 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0526 21:51:38.265337  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:38.265771  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:38.265806  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:38.265703  557796 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0526 21:51:41.635585  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:41.635985  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find current IP address of domain force-systemd-flag-20210526215127-510955 in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:41.636011  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | I0526 21:51:41.635944  557796 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0526 21:51:44.757537  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:44.758075  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Found IP for machine: 192.168.39.154
	I0526 21:51:44.758106  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Reserving static IP address...
	I0526 21:51:44.758126  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has current primary IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:44.758505  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-20210526215127-510955", mac: "52:54:00:60:7b:2f", ip: "192.168.39.154"} in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:44.807757  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Getting to WaitForSSH function...
	I0526 21:51:44.807795  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Reserved static IP address: 192.168.39.154
	I0526 21:51:44.807809  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Waiting for SSH to be available...
	I0526 21:51:44.812441  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:44.812768  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:44.812812  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:44.812996  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Using SSH client type: external
	I0526 21:51:44.813045  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/id_rsa (-rw-------)
	I0526 21:51:44.813098  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:51:44.813119  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | About to run SSH command:
	I0526 21:51:44.813135  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | exit 0
	I0526 21:51:44.944169  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | SSH cmd err, output: <nil>: 
	I0526 21:51:44.944493  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) KVM machine creation complete!
	I0526 21:51:44.944559  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetConfigRaw
	I0526 21:51:44.945124  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:44.945335  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:44.945502  557773 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0526 21:51:44.945529  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetState
	I0526 21:51:44.948021  557773 main.go:128] libmachine: Detecting operating system of created instance...
	I0526 21:51:44.948035  557773 main.go:128] libmachine: Waiting for SSH to be available...
	I0526 21:51:44.948041  557773 main.go:128] libmachine: Getting to WaitForSSH function...
	I0526 21:51:44.948048  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:44.952662  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:44.953068  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:44.953101  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:44.953228  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:44.953407  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:44.953579  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:44.953725  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:44.953891  557773 main.go:128] libmachine: Using SSH client type: native
	I0526 21:51:44.954118  557773 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0526 21:51:44.954136  557773 main.go:128] libmachine: About to run SSH command:
	exit 0
	I0526 21:51:45.084686  557773 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:51:45.084713  557773 main.go:128] libmachine: Detecting the provisioner...
	I0526 21:51:45.084726  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:45.090492  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.090949  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.090990  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.091120  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:45.091334  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.091542  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.091686  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:45.091864  557773 main.go:128] libmachine: Using SSH client type: native
	I0526 21:51:45.092044  557773 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0526 21:51:45.092063  557773 main.go:128] libmachine: About to run SSH command:
	cat /etc/os-release
	I0526 21:51:45.214725  557773 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0526 21:51:45.214784  557773 main.go:128] libmachine: found compatible host: buildroot
	I0526 21:51:45.214791  557773 main.go:128] libmachine: Provisioning with buildroot...
	I0526 21:51:45.214800  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetMachineName
	I0526 21:51:45.215069  557773 buildroot.go:166] provisioning hostname "force-systemd-flag-20210526215127-510955"
	I0526 21:51:45.215096  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetMachineName
	I0526 21:51:45.215307  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:45.220409  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.220738  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.220777  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.220855  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:45.221135  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.221312  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.221463  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:45.221679  557773 main.go:128] libmachine: Using SSH client type: native
	I0526 21:51:45.221869  557773 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0526 21:51:45.221913  557773 main.go:128] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-20210526215127-510955 && echo "force-systemd-flag-20210526215127-510955" | sudo tee /etc/hostname
	I0526 21:51:45.355987  557773 main.go:128] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-20210526215127-510955
	
	I0526 21:51:45.356025  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:45.361600  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.361922  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.361974  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.362237  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:45.362404  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.362596  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.362711  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:45.362939  557773 main.go:128] libmachine: Using SSH client type: native
	I0526 21:51:45.363111  557773 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0526 21:51:45.363143  557773 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-20210526215127-510955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-20210526215127-510955/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-20210526215127-510955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:51:45.494810  557773 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:51:45.494848  557773 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikub
e/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:51:45.494901  557773 buildroot.go:174] setting up certificates
	I0526 21:51:45.494912  557773 provision.go:83] configureAuth start
	I0526 21:51:45.494930  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetMachineName
	I0526 21:51:45.495219  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetIP
	I0526 21:51:45.500659  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.501036  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.501068  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.501265  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:45.506982  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.507271  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.507313  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.507437  557773 provision.go:137] copyHostCerts
	I0526 21:51:45.507472  557773 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:51:45.507517  557773 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:51:45.507532  557773 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:51:45.507591  557773 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:51:45.507676  557773 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:51:45.507695  557773 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:51:45.507703  557773 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:51:45.507722  557773 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:51:45.507806  557773 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:51:45.507828  557773 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:51:45.507839  557773 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:51:45.507860  557773 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:51:45.507917  557773 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-20210526215127-510955 san=[192.168.39.154 192.168.39.154 localhost 127.0.0.1 minikube force-systemd-flag-20210526215127-510955]
	I0526 21:51:45.762812  557773 provision.go:171] copyRemoteCerts
	I0526 21:51:45.762877  557773 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:51:45.762902  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:45.768474  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.768795  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.768836  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.768957  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:45.769163  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.769319  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:45.769464  557773 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/id_rsa Username:docker}
	I0526 21:51:45.856228  557773 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0526 21:51:45.856277  557773 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0526 21:51:45.873497  557773 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0526 21:51:45.873546  557773 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:51:45.890859  557773 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0526 21:51:45.890897  557773 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0526 21:51:45.907341  557773 provision.go:86] duration metric: configureAuth took 412.41489ms
	I0526 21:51:45.907365  557773 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:51:45.907557  557773 main.go:128] libmachine: Checking connection to Docker...
	I0526 21:51:45.907578  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetURL
	I0526 21:51:45.910180  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | Using libvirt version 3000000
	I0526 21:51:45.915271  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.915610  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.915642  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.915781  557773 main.go:128] libmachine: Docker is up and running!
	I0526 21:51:45.915794  557773 main.go:128] libmachine: Reticulating splines...
	I0526 21:51:45.915801  557773 client.go:171] LocalClient.Create took 18.591992666s
	I0526 21:51:45.915820  557773 start.go:168] duration metric: libmachine.API.Create for "force-systemd-flag-20210526215127-510955" took 18.592050222s
	I0526 21:51:45.915833  557773 start.go:267] post-start starting for "force-systemd-flag-20210526215127-510955" (driver="kvm2")
	I0526 21:51:45.915846  557773 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:51:45.915867  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:45.916094  557773 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:51:45.916126  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:45.920852  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.921182  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:45.921223  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:45.921298  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:45.921472  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:45.921613  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:45.921727  557773 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/id_rsa Username:docker}
	I0526 21:51:46.007700  557773 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:51:46.012150  557773 info.go:137] Remote host: Buildroot 2020.02.12
	I0526 21:51:46.012174  557773 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:51:46.012228  557773 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
	I0526 21:51:46.012381  557773 start.go:270] post-start completed in 96.53597ms
	I0526 21:51:46.012414  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetConfigRaw
	I0526 21:51:46.013024  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetIP
	I0526 21:51:46.017959  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.018258  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:46.018295  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.018503  557773 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/force-systemd-flag-20210526215127-510955/config.json ...
	I0526 21:51:46.018667  557773 start.go:129] duration metric: createHost completed in 18.709516215s
	I0526 21:51:46.018682  557773 start.go:80] releasing machines lock for "force-systemd-flag-20210526215127-510955", held for 18.709636291s
	I0526 21:51:46.018727  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:46.018912  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetIP
	I0526 21:51:46.023751  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.024096  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:46.024136  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.024232  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:46.024414  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:46.024887  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .DriverName
	I0526 21:51:46.025126  557773 ssh_runner.go:149] Run: systemctl --version
	I0526 21:51:46.025151  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:46.025185  557773 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:51:46.025230  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHHostname
	I0526 21:51:46.030463  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.030813  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:46.030858  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.030914  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:46.031092  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:46.031248  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:46.031388  557773 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/id_rsa Username:docker}
	I0526 21:51:46.031652  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.032025  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:7b:2f", ip: ""} in network mk-force-systemd-flag-20210526215127-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:51:43 +0000 UTC Type:0 Mac:52:54:00:60:7b:2f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:force-systemd-flag-20210526215127-510955 Clientid:01:52:54:00:60:7b:2f}
	I0526 21:51:46.032062  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) DBG | domain force-systemd-flag-20210526215127-510955 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:7b:2f in network mk-force-systemd-flag-20210526215127-510955
	I0526 21:51:46.032182  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHPort
	I0526 21:51:46.032334  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHKeyPath
	I0526 21:51:46.032493  557773 main.go:128] libmachine: (force-systemd-flag-20210526215127-510955) Calling .GetSSHUsername
	I0526 21:51:46.032640  557773 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-flag-20210526215127-510955/id_rsa Username:docker}
	I0526 21:51:46.138345  557773 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:51:46.138395  557773 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:51:46.138497  557773 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:51:50.131791  557773 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.993262872s)
	I0526 21:51:50.131964  557773 containerd.go:566] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.2". assuming images are not preloaded.
	I0526 21:51:50.132024  557773 ssh_runner.go:149] Run: which lz4
	I0526 21:51:50.138209  557773 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0526 21:51:50.138324  557773 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0526 21:51:50.143402  557773 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:51:50.143438  557773 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (953722271 bytes)
	I0526 21:51:54.367681  557773 containerd.go:503] Took 4.229408 seconds to copy over tarball
	I0526 21:51:54.367777  557773 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0526 21:52:02.106241  557773 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.738432126s)
	I0526 21:52:02.106285  557773 containerd.go:510] Took 7.738551 seconds to extract the tarball
	I0526 21:52:02.106305  557773 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0526 21:52:02.169234  557773 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:52:05.680962  557773 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:52:05.739754  557773 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:52:05.752909  557773 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0526 21:52:05.822770  557773 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:52:05.839962  557773 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:52:05.854406  557773 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:52:05.871245  557773 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IHRydWUKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg
0CiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWUub3B0aW9uc10KICAgICAgICAgIE5vUGl2b3RSb290ID0gdHJ1ZQogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgo
gICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0526 21:52:05.888727  557773 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:52:05.895258  557773 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:52:05.895307  557773 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0526 21:52:05.913247  557773 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0526 21:52:05.921199  557773 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:52:06.064126  557773 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:52:06.107114  557773 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:52:06.107201  557773 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:52:06.112313  557773 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:52:07.217777  557773 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:52:07.225112  557773 start.go:401] Will wait 60s for crictl version
	I0526 21:52:07.225174  557773 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:52:07.244427  557773 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:52:07Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:52:21.653655  557773 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:52:21.681175  557773 retry.go:31] will retry after 17.468400798s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:52:21Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:52:39.151053  557773 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:52:39.170394  557773 retry.go:31] will retry after 21.098569212s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:52:39Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:53:00.269229  557773 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:53:00.291064  557773 retry.go:31] will retry after 31.206515526s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:53:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:53:31.501917  557773 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:53:32.326089  557773 out.go:170] 
	W0526 21:53:32.326268  557773 out.go:235] X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:53:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:53:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0526 21:53:32.326298  557773 out.go:235] * 
	* 
	W0526 21:53:32.329067  557773 out.go:235] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	W0526 21:53:32.329092  557773 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 21:53:32.329101  557773 out.go:235] │    * If the above advice does not help, please let us know:                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                                                 │
	W0526 21:53:32.329107  557773 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	W0526 21:53:32.329113  557773 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 21:53:32.329125  557773 out.go:235] │    * Please attach the following file to the GitHub issue:                                                                                                  │
	│    * Please attach the following file to the GitHub issue:                                                                                                  │
	W0526 21:53:32.329135  557773 out.go:235] │    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	W0526 21:53:32.329143  557773 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 21:53:32.329149  557773 out.go:235] ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	W0526 21:53:32.329155  557773 out.go:235] 
	
	I0526 21:53:32.357059  557773 out.go:170] 

                                                
                                                
** /stderr **
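Note: every "sudo crictl version" attempt after the containerd restart above fails with "unknown service runtime.v1alpha2.RuntimeService", which generally indicates that the CRI service never came up on /run/containerd/containerd.sock after the generated config.toml was applied. A minimal sketch of the in-guest checks that would confirm this, assuming the profile were still running (the cleanup step below deletes it, so this is illustrative only):

	# Open a shell in the guest VM for the failed profile:
	out/minikube-linux-amd64 ssh -p force-systemd-flag-20210526215127-510955

	# Is containerd itself healthy after the restart triggered by the new config?
	sudo systemctl status containerd --no-pager
	sudo journalctl -u containerd --no-pager | tail -n 50

	# Repeat the call the log shows failing; a healthy CRI endpoint prints
	# client and runtime versions instead of "unknown service ...RuntimeService".
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version

	# Inspect the config minikube wrote (the base64 payload earlier in the log
	# decodes to this file; it sets systemd_cgroup = true for --force-systemd).
	cat /etc/containerd/config.toml
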
docker_test.go:87: failed to start minikube with args: "out/minikube-linux-amd64 start -p force-systemd-flag-20210526215127-510955 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd" : exit status 90
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210526215127-510955 ssh "cat /etc/containerd/config.toml"
docker_test.go:98: *** TestForceSystemdFlag FAILED at 2021-05-26 21:53:32.614656874 +0000 UTC m=+4447.732205350
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-flag-20210526215127-510955 -n force-systemd-flag-20210526215127-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-flag-20210526215127-510955 -n force-systemd-flag-20210526215127-510955: exit status 6 (254.664291ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 21:53:32.857913  559033 status.go:413] kubeconfig endpoint: extract IP: "force-systemd-flag-20210526215127-510955" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 6 (may be ok)
helpers_test.go:237: "force-systemd-flag-20210526215127-510955" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:171: Cleaning up "force-systemd-flag-20210526215127-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210526215127-510955
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210526215127-510955: (1.227952868s)
--- FAIL: TestForceSystemdFlag (126.97s)
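Note: the failing start can be replayed outside the test harness with the exact arguments recorded at docker_test.go:87 above, and the log file the boxed advice asks for is written under the run's MINIKUBE_HOME. A sketch using the paths from this report (substitute your own MINIKUBE_HOME, typically ~/.minikube):

	# Re-run the start command verbatim from the failure above:
	out/minikube-linux-amd64 start -p force-systemd-flag-20210526215127-510955 \
	  --memory=2048 --force-systemd --alsologtostderr -v=5 \
	  --driver=kvm2 --container-runtime=containerd

	# The boxed advice in the log asks for this file when filing an issue:
	less ~/.minikube/logs/lastStart.txt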

                                                
                                    
TestForceSystemdEnv (145.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210526214750-510955 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p force-systemd-env-20210526214750-510955 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: exit status 90 (2m24.111430557s)

                                                
                                                
-- stdout --
	* [force-systemd-env-20210526214750-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the kvm2 driver based on user configuration
	* Starting control plane node force-systemd-env-20210526214750-510955 in cluster force-systemd-env-20210526214750-510955
	* Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0526 21:47:50.748086  555713 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:47:50.748328  555713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:47:50.748341  555713 out.go:304] Setting ErrFile to fd 2...
	I0526 21:47:50.748347  555713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:47:50.748494  555713 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:47:50.748832  555713 out.go:298] Setting JSON to false
	I0526 21:47:50.799219  555713 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":19833,"bootTime":1622045838,"procs":147,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:47:50.799328  555713 start.go:120] virtualization: kvm guest
	I0526 21:47:50.801752  555713 out.go:170] * [force-systemd-env-20210526214750-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:47:50.803567  555713 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:47:50.801939  555713 notify.go:169] Checking for updates...
	I0526 21:47:50.805099  555713 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:47:50.806961  555713 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:47:50.808609  555713 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:47:50.810454  555713 out.go:170]   - MINIKUBE_FORCE_SYSTEMD=true
	I0526 21:47:50.810683  555713 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:47:50.844038  555713 out.go:170] * Using the kvm2 driver based on user configuration
	I0526 21:47:50.844064  555713 start.go:278] selected driver: kvm2
	I0526 21:47:50.844070  555713 start.go:751] validating driver "kvm2" against <nil>
	I0526 21:47:50.844086  555713 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:47:50.844768  555713 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:47:50.854109  555713 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 21:47:50.867773  555713 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 21:47:50.867839  555713 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0526 21:47:50.867996  555713 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
	I0526 21:47:50.868014  555713 cni.go:93] Creating CNI manager for ""
	I0526 21:47:50.868023  555713 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0526 21:47:50.868033  555713 start_flags.go:268] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0526 21:47:50.868047  555713 start_flags.go:273] config:
	{Name:force-systemd-env-20210526214750-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-env-20210526214750-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:47:50.868149  555713 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:47:50.870018  555713 out.go:170] * Starting control plane node force-systemd-env-20210526214750-510955 in cluster force-systemd-env-20210526214750-510955
	I0526 21:47:50.870055  555713 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:47:50.870089  555713 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:47:50.870114  555713 cache.go:54] Caching tarball of preloaded images
	I0526 21:47:50.870210  555713 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 21:47:50.870232  555713 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
	I0526 21:47:50.873248  555713 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/force-systemd-env-20210526214750-510955/config.json ...
	I0526 21:47:50.873293  555713 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/force-systemd-env-20210526214750-510955/config.json: {Name:mk5e0528dd1b3edc12d9c0dea213cad9e370d37c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:47:50.873607  555713 cache.go:191] Successfully downloaded all kic artifacts
	I0526 21:47:50.873683  555713 start.go:313] acquiring machines lock for force-systemd-env-20210526214750-510955: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:48:06.381810  555713 start.go:317] acquired machines lock for "force-systemd-env-20210526214750-510955" in 15.508100142s
	I0526 21:48:06.381858  555713 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20210526214750-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-env-20210526214750-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0526 21:48:06.381941  555713 start.go:126] createHost starting for "" (driver="kvm2")
	I0526 21:48:06.384181  555713 out.go:197] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0526 21:48:06.384367  555713 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:48:06.384427  555713 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:48:06.397732  555713 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:46127
	I0526 21:48:06.398146  555713 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:48:06.398837  555713 main.go:128] libmachine: Using API Version  1
	I0526 21:48:06.398854  555713 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:48:06.399250  555713 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:48:06.399412  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetMachineName
	I0526 21:48:06.399536  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:06.399668  555713 start.go:160] libmachine.API.Create for "force-systemd-env-20210526214750-510955" (driver="kvm2")
	I0526 21:48:06.399698  555713 client.go:168] LocalClient.Create starting
	I0526 21:48:06.399733  555713 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem
	I0526 21:48:06.399794  555713 main.go:128] libmachine: Decoding PEM data...
	I0526 21:48:06.399820  555713 main.go:128] libmachine: Parsing certificate...
	I0526 21:48:06.399944  555713 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem
	I0526 21:48:06.399969  555713 main.go:128] libmachine: Decoding PEM data...
	I0526 21:48:06.399988  555713 main.go:128] libmachine: Parsing certificate...
	I0526 21:48:06.400042  555713 main.go:128] libmachine: Running pre-create checks...
	I0526 21:48:06.400057  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .PreCreateCheck
	I0526 21:48:06.400372  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetConfigRaw
	I0526 21:48:06.400786  555713 main.go:128] libmachine: Creating machine...
	I0526 21:48:06.400804  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .Create
	I0526 21:48:06.400932  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Creating KVM machine...
	I0526 21:48:06.403499  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found existing default KVM network
	I0526 21:48:06.405083  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:06.404935  556162 network.go:215] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:eb:d9:d7}}
	I0526 21:48:06.406211  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:06.406123  556162 network.go:215] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:37:01}}
	I0526 21:48:06.407265  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:06.407179  556162 network.go:263] reserving subnet 192.168.61.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.61.0:0xc00018a2d8] misses:0}
	I0526 21:48:06.407307  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:06.407221  556162 network.go:210] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0526 21:48:06.431190  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | trying to create private KVM network mk-force-systemd-env-20210526214750-510955 192.168.61.0/24...
	I0526 21:48:06.697299  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | private KVM network mk-force-systemd-env-20210526214750-510955 192.168.61.0/24 created
	I0526 21:48:06.697348  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:06.697280  556162 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:48:06.697367  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955 ...
	I0526 21:48:06.697417  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso
	I0526 21:48:06.697441  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso...
	I0526 21:48:06.893374  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:06.893251  556162 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa...
	I0526 21:48:07.093592  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:07.093475  556162 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/force-systemd-env-20210526214750-510955.rawdisk...
	I0526 21:48:07.093631  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Writing magic tar header
	I0526 21:48:07.093655  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Writing SSH key tar header
	I0526 21:48:07.093678  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:07.093588  556162 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955 ...
	I0526 21:48:07.093710  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955
	I0526 21:48:07.093748  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955 (perms=drwx------)
	I0526 21:48:07.093769  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines
	I0526 21:48:07.093797  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines (perms=drwxr-xr-x)
	I0526 21:48:07.093819  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:48:07.093841  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1
	I0526 21:48:07.093866  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube (perms=drwxr-xr-x)
	I0526 21:48:07.093887  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0526 21:48:07.093910  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1 (perms=drwxr-xr-x)
	I0526 21:48:07.093926  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0526 21:48:07.093936  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0526 21:48:07.093949  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Creating domain...
	I0526 21:48:07.093987  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Checking permissions on dir: /home/jenkins
	I0526 21:48:07.094013  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Checking permissions on dir: /home
	I0526 21:48:07.094034  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Skipping /home - not owner
	I0526 21:48:07.122431  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b4:cb:94 in network default
	I0526 21:48:07.123030  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:07.123060  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Ensuring networks are active...
	I0526 21:48:07.124961  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Ensuring network default is active
	I0526 21:48:07.125249  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Ensuring network mk-force-systemd-env-20210526214750-510955 is active
	I0526 21:48:07.125743  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Getting domain xml...
	I0526 21:48:07.127519  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Creating domain...
	I0526 21:48:07.514730  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Waiting to get IP...
	I0526 21:48:07.515469  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:07.515960  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:07.515984  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:07.515914  556162 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0526 21:48:07.780287  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:07.780857  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:07.780899  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:07.780769  556162 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0526 21:48:08.163143  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:08.163535  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:08.163573  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:08.163462  556162 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0526 21:48:08.587956  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:08.588326  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:08.588354  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:08.588277  556162 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0526 21:48:09.062672  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:09.063058  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:09.063090  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:09.063012  556162 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0526 21:48:09.651602  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:09.652016  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:09.652041  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:09.651979  556162 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0526 21:48:10.487914  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:10.488459  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:10.488493  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:10.488405  556162 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0526 21:48:11.236317  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:11.236763  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:11.236790  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:11.236710  556162 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0526 21:48:12.225893  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:12.226349  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:12.226376  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:12.226283  556162 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0526 21:48:13.417359  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:13.417822  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:13.417849  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:13.417766  556162 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0526 21:48:15.096767  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:15.097363  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:15.097398  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:15.097303  556162 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0526 21:48:17.445080  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:17.445581  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:17.445614  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:17.445511  556162 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0526 21:48:20.815932  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:22.559612  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find current IP address of domain force-systemd-env-20210526214750-510955 in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:22.559672  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | I0526 21:48:20.816247  556162 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0526 21:48:23.936288  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:23.936666  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Found IP for machine: 192.168.61.5
	I0526 21:48:23.936703  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has current primary IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:23.936716  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Reserving static IP address...
	I0526 21:48:23.937044  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-20210526214750-510955", mac: "52:54:00:b1:73:1b", ip: "192.168.61.5"} in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:26.435956  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Getting to WaitForSSH function...
	I0526 21:48:26.435994  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Reserved static IP address: 192.168.61.5
	I0526 21:48:26.436009  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Waiting for SSH to be available...
	I0526 21:48:26.441784  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:26.442290  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:26.442332  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:26.442632  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Using SSH client type: external
	I0526 21:48:26.442676  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa (-rw-------)
	I0526 21:48:26.442722  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:48:26.442759  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | About to run SSH command:
	I0526 21:48:26.442775  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | exit 0
	I0526 21:48:26.529184  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | SSH cmd err, output: exit status 255: 
	I0526 21:48:26.529216  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0526 21:48:26.529225  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | command : exit 0
	I0526 21:48:26.529232  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | err     : exit status 255
	I0526 21:48:26.529277  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | output  : 
	I0526 21:48:29.529797  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Getting to WaitForSSH function...
	I0526 21:48:29.536153  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.536518  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:29.536551  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.536691  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Using SSH client type: external
	I0526 21:48:29.536728  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa (-rw-------)
	I0526 21:48:29.536781  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:48:29.536800  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | About to run SSH command:
	I0526 21:48:29.536814  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | exit 0
	I0526 21:48:29.668936  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | SSH cmd err, output: <nil>: 
	I0526 21:48:29.669387  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) KVM machine creation complete!
	I0526 21:48:29.669457  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetConfigRaw
	I0526 21:48:29.670101  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:29.670313  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:29.670463  555713 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0526 21:48:29.670482  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetState
	I0526 21:48:29.673587  555713 main.go:128] libmachine: Detecting operating system of created instance...
	I0526 21:48:29.673604  555713 main.go:128] libmachine: Waiting for SSH to be available...
	I0526 21:48:29.673613  555713 main.go:128] libmachine: Getting to WaitForSSH function...
	I0526 21:48:29.673624  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:29.679317  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.679698  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:29.679730  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.679866  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:29.680027  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:29.680158  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:29.680291  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:29.680519  555713 main.go:128] libmachine: Using SSH client type: native
	I0526 21:48:29.680744  555713 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I0526 21:48:29.680762  555713 main.go:128] libmachine: About to run SSH command:
	exit 0
	I0526 21:48:29.804889  555713 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:48:29.804916  555713 main.go:128] libmachine: Detecting the provisioner...
	I0526 21:48:29.804935  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:29.811407  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.811895  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:29.811927  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.812095  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:29.812281  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:29.812439  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:29.812567  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:29.812722  555713 main.go:128] libmachine: Using SSH client type: native
	I0526 21:48:29.812920  555713 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I0526 21:48:29.812940  555713 main.go:128] libmachine: About to run SSH command:
	cat /etc/os-release
	I0526 21:48:29.935050  555713 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0526 21:48:29.935147  555713 main.go:128] libmachine: found compatible host: buildroot
	I0526 21:48:29.935164  555713 main.go:128] libmachine: Provisioning with buildroot...
	I0526 21:48:29.935177  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetMachineName
	I0526 21:48:29.935404  555713 buildroot.go:166] provisioning hostname "force-systemd-env-20210526214750-510955"
	I0526 21:48:29.935433  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetMachineName
	I0526 21:48:29.935594  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:29.942123  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.942510  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:29.942543  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:29.942720  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:29.942891  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:29.943023  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:29.943157  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:29.943315  555713 main.go:128] libmachine: Using SSH client type: native
	I0526 21:48:29.943504  555713 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I0526 21:48:29.943525  555713 main.go:128] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-20210526214750-510955 && echo "force-systemd-env-20210526214750-510955" | sudo tee /etc/hostname
	I0526 21:48:30.077563  555713 main.go:128] libmachine: SSH cmd err, output: <nil>: force-systemd-env-20210526214750-510955
	
	I0526 21:48:30.077596  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:30.084247  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.084682  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.084712  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.084927  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:30.085101  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:30.085282  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:30.085441  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:30.085609  555713 main.go:128] libmachine: Using SSH client type: native
	I0526 21:48:30.085836  555713 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.61.5 22 <nil> <nil>}
	I0526 21:48:30.085867  555713 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-20210526214750-510955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-20210526214750-510955/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-20210526214750-510955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:48:30.221810  555713 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:48:30.221850  555713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:48:30.221874  555713 buildroot.go:174] setting up certificates
	I0526 21:48:30.221885  555713 provision.go:83] configureAuth start
	I0526 21:48:30.221899  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetMachineName
	I0526 21:48:30.222199  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetIP
	I0526 21:48:30.228921  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.229506  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.229537  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:30.229609  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.234933  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.235333  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.235361  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.235495  555713 provision.go:137] copyHostCerts
	I0526 21:48:30.235527  555713 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:48:30.235575  555713 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:48:30.235587  555713 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:48:30.235641  555713 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:48:30.235719  555713 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:48:30.235742  555713 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:48:30.235752  555713 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:48:30.235774  555713 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:48:30.235896  555713 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:48:30.235925  555713 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:48:30.235932  555713 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:48:30.235959  555713 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:48:30.236021  555713 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-20210526214750-510955 san=[192.168.61.5 192.168.61.5 localhost 127.0.0.1 minikube force-systemd-env-20210526214750-510955]
	I0526 21:48:30.570789  555713 provision.go:171] copyRemoteCerts
	I0526 21:48:30.570881  555713 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:48:30.570926  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:30.577169  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.577498  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.577533  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.577679  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:30.577859  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:30.578027  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:30.578195  555713 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa Username:docker}
	I0526 21:48:30.667029  555713 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0526 21:48:30.667096  555713 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:48:30.695201  555713 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0526 21:48:30.695266  555713 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0526 21:48:30.716197  555713 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0526 21:48:30.716254  555713 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0526 21:48:30.736041  555713 provision.go:86] duration metric: configureAuth took 514.142987ms
	I0526 21:48:30.736070  555713 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:48:30.736280  555713 main.go:128] libmachine: Checking connection to Docker...
	I0526 21:48:30.736306  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetURL
	I0526 21:48:30.739463  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | Using libvirt version 3000000
	I0526 21:48:30.744985  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.745395  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.745421  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.745663  555713 main.go:128] libmachine: Docker is up and running!
	I0526 21:48:30.745684  555713 main.go:128] libmachine: Reticulating splines...
	I0526 21:48:30.745692  555713 client.go:171] LocalClient.Create took 24.345986826s
	I0526 21:48:30.745714  555713 start.go:168] duration metric: libmachine.API.Create for "force-systemd-env-20210526214750-510955" took 24.346046745s
	I0526 21:48:30.745727  555713 start.go:267] post-start starting for "force-systemd-env-20210526214750-510955" (driver="kvm2")
	I0526 21:48:30.745734  555713 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:48:30.745755  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:30.746005  555713 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:48:30.746035  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:30.751102  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.751520  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.751552  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.751773  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:30.751959  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:30.752131  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:30.752284  555713 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa Username:docker}
	I0526 21:48:30.838368  555713 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:48:30.844356  555713 info.go:137] Remote host: Buildroot 2020.02.12
	I0526 21:48:30.844385  555713 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:48:30.844457  555713 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
	I0526 21:48:30.844578  555713 start.go:270] post-start completed in 98.835481ms
	I0526 21:48:30.844615  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetConfigRaw
	I0526 21:48:30.845331  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetIP
	I0526 21:48:30.851386  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.851805  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.851836  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.852101  555713 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/force-systemd-env-20210526214750-510955/config.json ...
	I0526 21:48:30.852283  555713 start.go:129] duration metric: createHost completed in 24.470328746s
	I0526 21:48:30.852302  555713 start.go:80] releasing machines lock for "force-systemd-env-20210526214750-510955", held for 24.47046656s
	I0526 21:48:30.852346  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:30.852536  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetIP
	I0526 21:48:30.857850  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.858232  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.858263  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.858403  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:30.858544  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:30.858991  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .DriverName
	I0526 21:48:30.859197  555713 ssh_runner.go:149] Run: systemctl --version
	I0526 21:48:30.859225  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:30.859302  555713 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:48:30.859336  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:48:30.866942  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.867136  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.867495  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.867540  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:73:1b", ip: ""} in network mk-force-systemd-env-20210526214750-510955: {Iface:virbr3 ExpiryTime:2021-05-26 22:48:22 +0000 UTC Type:0 Mac:52:54:00:b1:73:1b Iaid: IPaddr:192.168.61.5 Prefix:24 Hostname:force-systemd-env-20210526214750-510955 Clientid:01:52:54:00:b1:73:1b}
	I0526 21:48:30.867657  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.867708  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) DBG | domain force-systemd-env-20210526214750-510955 has defined IP address 192.168.61.5 and MAC address 52:54:00:b1:73:1b in network mk-force-systemd-env-20210526214750-510955
	I0526 21:48:30.868010  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:30.868194  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:30.868228  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHPort
	I0526 21:48:30.868391  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:48:30.868391  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:30.869431  555713 main.go:128] libmachine: (force-systemd-env-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:48:30.869431  555713 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa Username:docker}
	I0526 21:48:30.869589  555713 sshutil.go:53] new ssh client: &{IP:192.168.61.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/force-systemd-env-20210526214750-510955/id_rsa Username:docker}
	I0526 21:48:30.981415  555713 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:48:30.981469  555713 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:48:30.981583  555713 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:48:35.011367  555713 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.029757848s)
	I0526 21:48:35.011489  555713 containerd.go:566] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.2". assuming images are not preloaded.
	I0526 21:48:35.011551  555713 ssh_runner.go:149] Run: which lz4
	I0526 21:48:35.016969  555713 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0526 21:48:35.017056  555713 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0526 21:48:35.022703  555713 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:48:35.022733  555713 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (953722271 bytes)
	I0526 21:48:38.520221  555713 containerd.go:503] Took 3.503187 seconds to copy over tarball
	I0526 21:48:38.520312  555713 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0526 21:48:47.053890  555713 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (8.533515998s)
	I0526 21:48:47.598524  555713 containerd.go:510] Took 9.078258 seconds to extract the tarball
	I0526 21:48:47.598582  555713 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0526 21:48:47.681532  555713 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:48:47.826890  555713 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:48:47.877116  555713 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:48:47.888960  555713 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0526 21:48:48.470413  555713 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:48:48.482922  555713 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:48:48.493518  555713 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:48:48.507267  555713 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IHRydWUKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg
0CiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWUub3B0aW9uc10KICAgICAgICAgIE5vUGl2b3RSb290ID0gdHJ1ZQogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgo
gICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0526 21:48:48.522195  555713 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:48:48.528848  555713 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:48:48.528921  555713 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0526 21:48:48.545992  555713 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0526 21:48:48.552818  555713 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:48:48.678121  555713 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:48:49.354804  555713 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:48:49.354881  555713 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:48:49.361576  555713 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:48:50.466751  555713 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:48:50.474022  555713 start.go:401] Will wait 60s for crictl version
	I0526 21:48:50.474079  555713 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:48:50.494321  555713 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:48:50Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:49:04.901250  555713 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:49:04.920627  555713 retry.go:31] will retry after 17.468400798s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:49:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:49:22.389907  555713 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:49:22.413806  555713 retry.go:31] will retry after 21.098569212s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:49:22Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:49:43.513971  555713 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:49:43.539851  555713 retry.go:31] will retry after 31.206515526s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:49:43Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0526 21:50:14.746880  555713 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:50:14.777026  555713 out.go:170] 
	W0526 21:50:14.777165  555713 out.go:235] X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:50:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-26T21:50:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0526 21:50:14.777179  555713 out.go:235] * 
	* 
	W0526 21:50:14.779489  555713 out.go:235] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	W0526 21:50:14.779519  555713 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 21:50:14.779530  555713 out.go:235] │    * If the above advice does not help, please let us know:                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                                                 │
	W0526 21:50:14.779538  555713 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	W0526 21:50:14.779546  555713 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 21:50:14.779557  555713 out.go:235] │    * Please attach the following file to the GitHub issue:                                                                                                  │
	│    * Please attach the following file to the GitHub issue:                                                                                                  │
	W0526 21:50:14.779569  555713 out.go:235] │    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	W0526 21:50:14.779576  555713 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 21:50:14.779585  555713 out.go:235] ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	W0526 21:50:14.779595  555713 out.go:235] 
	
	I0526 21:50:14.781177  555713 out.go:170] 

                                                
                                                
** /stderr **
docker_test.go:138: failed to start minikube with args: "out/minikube-linux-amd64 start -p force-systemd-env-20210526214750-510955 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd" : exit status 90
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210526214750-510955 ssh "cat /etc/containerd/config.toml"
docker_test.go:147: *** TestForceSystemdEnv FAILED at 2021-05-26 21:50:15.017660215 +0000 UTC m=+4250.135208663
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-env-20210526214750-510955 -n force-systemd-env-20210526214750-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-env-20210526214750-510955 -n force-systemd-env-20210526214750-510955: exit status 6 (257.551974ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 21:50:15.262985  556815 status.go:413] kubeconfig endpoint: extract IP: "force-systemd-env-20210526214750-510955" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 6 (may be ok)
helpers_test.go:237: "force-systemd-env-20210526214750-510955" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:171: Cleaning up "force-systemd-env-20210526214750-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210526214750-510955
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210526214750-510955: (1.062106787s)
--- FAIL: TestForceSystemdEnv (145.66s)
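
Note: the RUNTIME_ENABLE failure above is minikube giving up after repeatedly running "sudo crictl version" with growing delays (14s, 17s, 21s, 31s) while containerd never exposed the runtime.v1alpha2.RuntimeService. The following is a minimal, hypothetical Go sketch of that retry-until-deadline pattern; it is not minikube's actual retry.go or start.go code, and the command, deadline, and delay growth are illustrative assumptions taken from the log.

// retry_crictl.go - hypothetical sketch of retrying "sudo crictl version"
// until it succeeds or a deadline passes, as seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // "Will wait 60s for crictl version"
	delay := 10 * time.Second                    // starting delay; real delays are jittered

	for {
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Printf("crictl is ready:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			// In minikube this is where the run surfaces as RUNTIME_ENABLE.
			fmt.Printf("giving up: %v\n%s", err, out)
			return
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay, roughly like the log's 14s -> 31s progression
	}
}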

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:111: step 0 has already been assigned to another step:
Stopping node "json-output-20210526211830-510955"  ...
Cannot use for:
Stopping node "json-output-20210526211830-510955"  ...
[Validation: valid
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 103141c1-57da-4626-b7a6-73363884e78f
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20210526211830-510955\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Validation: valid
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 4b0b4905-c375-4d52-bff9-6a6d70e06bba
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20210526211830-510955\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Validation: valid
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9b6da9a5-71c1-4f9e-a865-8c55e8ae38c7
datacontenttype: application/json
Data,
{
"currentstep": "2",
"message": "1 nodes stopped.",
"name": "Done",
"totalsteps": "2"
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:130: current step is not in increasing order: [Validation: valid
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 103141c1-57da-4626-b7a6-73363884e78f
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20210526211830-510955\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Validation: valid
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 4b0b4905-c375-4d52-bff9-6a6d70e06bba
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20210526211830-510955\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Validation: valid
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9b6da9a5-71c1-4f9e-a865-8c55e8ae38c7
datacontenttype: application/json
Data,
{
"currentstep": "2",
"message": "1 nodes stopped.",
"name": "Done",
"totalsteps": "2"
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
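
Note: both DistinctCurrentSteps and IncreasingCurrentSteps fail for the same reason: "minikube stop" emitted the "Stopping node ..." step event twice with currentstep 0. The following is a hypothetical Go sketch of the kind of check these subtests perform on the CloudEvent Data payloads shown above; it is not the real json_output_test.go, and the struct and messages are assumptions modeled on the log.

// step_check.go - hypothetical sketch: verify currentstep values are distinct
// and strictly increasing across io.k8s.sigs.minikube.step events.
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// stepData mirrors the "Data" payload of the step events in the log above.
type stepData struct {
	CurrentStep string `json:"currentstep"`
	Message     string `json:"message"`
	Name        string `json:"name"`
	TotalSteps  string `json:"totalsteps"`
}

func main() {
	// The three Data payloads from the failing "minikube stop" run above.
	raw := []string{
		`{"currentstep":"0","message":"Stopping node \"json-output-20210526211830-510955\"  ...","name":"Stopping","totalsteps":"2"}`,
		`{"currentstep":"0","message":"Stopping node \"json-output-20210526211830-510955\"  ...","name":"Stopping","totalsteps":"2"}`,
		`{"currentstep":"2","message":"1 nodes stopped.","name":"Done","totalsteps":"2"}`,
	}

	seen := map[string]string{} // currentstep -> message that already used it
	last := -1                  // last numeric currentstep, for the "increasing" check

	for _, r := range raw {
		var d stepData
		if err := json.Unmarshal([]byte(r), &d); err != nil {
			fmt.Println("invalid step event:", err)
			continue
		}
		if prev, ok := seen[d.CurrentStep]; ok {
			fmt.Printf("step %s has already been assigned to another step: %q, cannot use for: %q\n",
				d.CurrentStep, prev, d.Message)
		}
		seen[d.CurrentStep] = d.Message

		n, err := strconv.Atoi(d.CurrentStep)
		if err != nil || n <= last {
			fmt.Printf("current step is not in increasing order: %s after %d\n", d.CurrentStep, last)
			continue
		}
		last = n
	}
}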

                                                
                                    
x
+
TestMultiNode/serial/StopNode (85.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:190: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 node stop m03
E0526 21:27:50.135616  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:28:18.143480  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
multinode_test.go:190: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210526212238-510955 node stop m03: (1m0.253221654s)
multinode_test.go:196: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status
multinode_test.go:196: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210526212238-510955 status: exit status 3 (18.967908742s)

                                                
                                                
-- stdout --
	multinode-20210526212238-510955
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210526212238-510955-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210526212238-510955-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 21:28:47.641145  529048 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0526 21:28:47.641186  529048 status.go:258] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:199: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-20210526212238-510955 status" : exit status 3
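
Note: the exit status 3 here comes from the status probe of the stopped m03 node failing with "dial tcp 192.168.39.18:22: connect: no route to host". The following is a hypothetical Go sketch of that kind of SSH-port reachability probe; it is not minikube's actual status.go code path, and the address and timeout are assumptions taken from the log.

// probe_ssh.go - hypothetical sketch: dial a node's SSH port with a timeout
// and report the error instead of hanging, as the status check above does.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.18:22" // the stopped m03 node from the status error above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On a stopped or unreachable VM this surfaces as
		// "dial tcp 192.168.39.18:22: connect: no route to host".
		fmt.Printf("status error: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("host: Running (SSH port reachable)")
}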
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210526212238-510955 -n multinode-20210526212238-510955
helpers_test.go:240: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210526212238-510955 logs -n 25: (3.04952597s)
helpers_test.go:248: TestMultiNode/serial/StopNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                             |             Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:22:38 UTC | Wed, 26 May 2021 21:26:18 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | --wait=true --memory=2200                                    |                                 |         |         |                               |                               |
	|         | --nodes=2 -v=8                                               |                                 |         |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2                              |                                 |         |         |                               |                               |
	|         |  --container-runtime=containerd                              |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955 -- apply -f               | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:18 UTC | Wed, 26 May 2021 21:26:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml            |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:19 UTC | Wed, 26 May 2021 21:26:21 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- rollout status                                            |                                 |         |         |                               |                               |
	|         | deployment/busybox                                           |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:21 UTC | Wed, 26 May 2021 21:26:21 UTC |
	|         | -- get pods -o                                               |                                 |         |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'                          |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:21 UTC | Wed, 26 May 2021 21:26:21 UTC |
	|         | -- get pods -o                                               |                                 |         |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'                         |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:21 UTC | Wed, 26 May 2021 21:26:22 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-4g265 --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.io                                       |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:22 UTC | Wed, 26 May 2021 21:26:22 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-dlslt --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.io                                       |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:22 UTC | Wed, 26 May 2021 21:26:22 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-4g265 --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.default                                  |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:22 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-dlslt --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.default                                  |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | -- exec busybox-6cd5ff77cb-4g265                             |                                 |         |         |                               |                               |
	|         | -- nslookup                                                  |                                 |         |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local                         |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | -- exec busybox-6cd5ff77cb-dlslt                             |                                 |         |         |                               |                               |
	|         | -- nslookup                                                  |                                 |         |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local                         |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | -- get pods -o                                               |                                 |         |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'                         |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-4g265                                     |                                 |         |         |                               |                               |
	|         | -- sh -c nslookup                                            |                                 |         |         |                               |                               |
	|         | host.minikube.internal | awk                                 |                                 |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                      |                                 |         |         |                               |                               |
	| ssh     | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:24 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | ip -4 -br -o a s eth0 | tr -s '                              |                                 |         |         |                               |                               |
	|         | ' | cut -d' ' -f3                                            |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:24 UTC | Wed, 26 May 2021 21:26:24 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-dlslt                                     |                                 |         |         |                               |                               |
	|         | -- sh -c nslookup                                            |                                 |         |         |                               |                               |
	|         | host.minikube.internal | awk                                 |                                 |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                      |                                 |         |         |                               |                               |
	| ssh     | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:24 UTC | Wed, 26 May 2021 21:26:24 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | ip -4 -br -o a s eth0 | tr -s '                              |                                 |         |         |                               |                               |
	|         | ' | cut -d' ' -f3                                            |                                 |         |         |                               |                               |
	| node    | add -p                                                       | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:24 UTC | Wed, 26 May 2021 21:27:25 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                       |                                 |         |         |                               |                               |
	| profile | list --output json                                           | minikube                        | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:26 UTC | Wed, 26 May 2021 21:27:26 UTC |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | cp testdata/cp-test.txt                                      |                                 |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | ssh sudo cat                                                 |                                 |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955 cp testdata/cp-test.txt      | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | multinode-20210526212238-510955-m02:/home/docker/cp-test.txt |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | ssh -n                                                       |                                 |         |         |                               |                               |
	|         | multinode-20210526212238-510955-m02                          |                                 |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955 cp testdata/cp-test.txt      | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:28 UTC | Wed, 26 May 2021 21:27:28 UTC |
	|         | multinode-20210526212238-510955-m03:/home/docker/cp-test.txt |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:28 UTC | Wed, 26 May 2021 21:27:28 UTC |
	|         | ssh -n                                                       |                                 |         |         |                               |                               |
	|         | multinode-20210526212238-510955-m03                          |                                 |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:28 UTC | Wed, 26 May 2021 21:28:28 UTC |
	|         | node stop m03                                                |                                 |         |         |                               |                               |
	|---------|--------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/26 21:22:38
	Running on machine: debian-jenkins-agent-4
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0526 21:22:38.756182  527485 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:22:38.756246  527485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:22:38.756249  527485 out.go:304] Setting ErrFile to fd 2...
	I0526 21:22:38.756252  527485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:22:38.756343  527485 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:22:38.756577  527485 out.go:298] Setting JSON to false
	I0526 21:22:38.791255  527485 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":18321,"bootTime":1622045838,"procs":142,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:22:38.791346  527485 start.go:120] virtualization: kvm guest
	I0526 21:22:38.793833  527485 out.go:170] * [multinode-20210526212238-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:22:38.793948  527485 notify.go:169] Checking for updates...
	I0526 21:22:38.795567  527485 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:22:38.797007  527485 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:22:38.798452  527485 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:22:38.799854  527485 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:22:38.800033  527485 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:22:38.828260  527485 out.go:170] * Using the kvm2 driver based on user configuration
	I0526 21:22:38.828278  527485 start.go:278] selected driver: kvm2
	I0526 21:22:38.828283  527485 start.go:751] validating driver "kvm2" against <nil>
	I0526 21:22:38.828296  527485 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:22:38.828759  527485 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:22:38.828916  527485 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 21:22:38.839336  527485 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 21:22:38.839382  527485 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0526 21:22:38.839510  527485 start_flags.go:656] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0526 21:22:38.839530  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:22:38.839535  527485 cni.go:154] 0 nodes found, recommending kindnet
	I0526 21:22:38.839541  527485 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0526 21:22:38.839547  527485 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0526 21:22:38.839552  527485 start_flags.go:268] Found "CNI" CNI - setting NetworkPlugin=cni
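
With no explicit CNI chosen and no existing nodes, the three cni.go lines above recommend kindnet and pin kubelet's cni-conf-dir via extra-config. Below is a minimal sketch of that kind of selection logic; the type, field and function names are invented for illustration and are not minikube's actual cni package.

	// Hypothetical sketch of CNI auto-selection for a multi-node,
	// containerd-based cluster; names are illustrative only.
	package main
	
	import "fmt"
	
	type clusterConfig struct {
		CNI              string // user-requested CNI, empty means "auto"
		NodeCount        int
		MultiNode        bool
		ContainerRuntime string
		ExtraOptions     map[string]string
	}
	
	func chooseCNI(cc *clusterConfig) string {
		if cc.CNI != "" {
			return cc.CNI // respect an explicit choice
		}
		// No nodes exist yet and a multi-node cluster was requested,
		// so pick a pod network that supports multiple nodes.
		if cc.NodeCount == 0 && cc.MultiNode {
			// Mirror the extra-config line from the log: pin kubelet's
			// CNI conf dir to a dedicated location.
			cc.ExtraOptions["kubelet.cni-conf-dir"] = "/etc/cni/net.mk"
			return "kindnet"
		}
		return "bridge"
	}
	
	func main() {
		cc := &clusterConfig{NodeCount: 0, MultiNode: true,
			ContainerRuntime: "containerd", ExtraOptions: map[string]string{}}
		fmt.Println(chooseCNI(cc), cc.ExtraOptions)
	}
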
	I0526 21:22:38.839560  527485 start_flags.go:273] config:
	{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0526 21:22:38.839645  527485 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:22:38.841622  527485 out.go:170] * Starting control plane node multinode-20210526212238-510955 in cluster multinode-20210526212238-510955
	I0526 21:22:38.841666  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:22:38.841712  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:22:38.841731  527485 cache.go:54] Caching tarball of preloaded images
	I0526 21:22:38.841861  527485 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 21:22:38.841878  527485 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
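
Skipping the download when the preload tarball is already cached comes down to a stat on the expected path. A small sketch under that assumption; the path layout follows the log, the helper names are invented.

	// Check whether a preloaded-images tarball is already cached locally;
	// only fall back to downloading when the stat fails. Illustrative only.
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
	)
	
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v11-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}
	
	func preloadExists(path string) bool {
		info, err := os.Stat(path)
		return err == nil && !info.IsDir() && info.Size() > 0
	}
	
	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.20.2", "containerd")
		if preloadExists(p) {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("no local preload, would download:", p)
		}
	}
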
	I0526 21:22:38.842834  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:22:38.842875  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json: {Name:mk78eec809dd8a578b82c2b088249ee76deae305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:22:38.843042  527485 cache.go:191] Successfully downloaded all kic artifacts
	I0526 21:22:38.843079  527485 start.go:313] acquiring machines lock for multinode-20210526212238-510955: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:22:38.843149  527485 start.go:317] acquired machines lock for "multinode-20210526212238-510955" in 50.564µs
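
The machines lock above is acquired with a retry delay (500ms) and an overall timeout (13m0s). One simple way to get that behaviour is to poll for an exclusive lock file until the deadline passes; this is only an illustration of the pattern, not the locking library minikube actually uses.

	// Poll for an exclusive lock file until it is acquired or the timeout
	// expires. A simplified stand-in for a delay/timeout-based machine lock.
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(delay) // e.g. 500ms between attempts
		}
	}
	
	func main() {
		release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to provision the machine")
	}
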
	I0526 21:22:38.843177  527485 start.go:89] Provisioning new machine with config: &{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0526 21:22:38.843250  527485 start.go:126] createHost starting for "" (driver="kvm2")
	I0526 21:22:38.845057  527485 out.go:197] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0526 21:22:38.845179  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:22:38.845238  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:22:38.855304  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:39059
	I0526 21:22:38.855713  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:22:38.856166  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:22:38.856194  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:22:38.856570  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:22:38.856762  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:38.856925  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
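
Calls such as .GetVersion, .GetMachineName and .DriverName go to a separate driver-plugin process that serves the kvm2 driver over a local RPC endpoint (the "Plugin server listening at address 127.0.0.1:39059" line). The sketch below shows the general shape of that pattern with Go's net/rpc; the service name, methods and wire format here are illustrative, not libmachine's actual plugin protocol.

	// Minimal driver-plugin pattern: serve a driver object over net/rpc on
	// 127.0.0.1 and call it from the client side. Illustrative only.
	package main
	
	import (
		"fmt"
		"net"
		"net/rpc"
	)
	
	// Driver is a stand-in for a machine driver exposed over RPC.
	type Driver struct{ machineName string }
	
	func (d *Driver) GetVersion(_ string, reply *int) error { *reply = 1; return nil }
	
	func (d *Driver) GetMachineName(_ string, reply *string) error {
		*reply = d.machineName
		return nil
	}
	
	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{machineName: "multinode-demo"}); err != nil {
			panic(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0") // plugin picks a free port
		if err != nil {
			panic(err)
		}
		fmt.Println("plugin server listening at", ln.Addr())
		go srv.Accept(ln)
	
		// Client side: dial the plugin and call driver methods by name.
		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		var version int
		if err := client.Call("Driver.GetVersion", "", &version); err != nil {
			panic(err)
		}
		var name string
		if err := client.Call("Driver.GetMachineName", "", &name); err != nil {
			panic(err)
		}
		fmt.Println("API version:", version, "machine:", name)
	}
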
	I0526 21:22:38.857068  527485 start.go:160] libmachine.API.Create for "multinode-20210526212238-510955" (driver="kvm2")
	I0526 21:22:38.857096  527485 client.go:168] LocalClient.Create starting
	I0526 21:22:38.857130  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem
	I0526 21:22:38.857163  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:22:38.857177  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:22:38.857308  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem
	I0526 21:22:38.857334  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:22:38.857358  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:22:38.857413  527485 main.go:128] libmachine: Running pre-create checks...
	I0526 21:22:38.857427  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .PreCreateCheck
	I0526 21:22:38.857733  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetConfigRaw
	I0526 21:22:38.858132  527485 main.go:128] libmachine: Creating machine...
	I0526 21:22:38.858149  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Create
	I0526 21:22:38.858272  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Creating KVM machine...
	I0526 21:22:38.860672  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found existing default KVM network
	I0526 21:22:38.861463  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:38.861317  527509 network.go:263] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000965e8] misses:0}
	I0526 21:22:38.861499  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:38.861408  527509 network.go:210] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0526 21:22:38.895502  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | trying to create private KVM network mk-multinode-20210526212238-510955 192.168.39.0/24...
	I0526 21:22:39.133237  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | private KVM network mk-multinode-20210526212238-510955 192.168.39.0/24 created
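
Choosing 192.168.39.0/24 comes down to walking candidate private /24 blocks, taking the first one that does not overlap a subnet already configured on the host, and reserving it while the libvirt network is created. A rough sketch of the collision check; the candidate list and step size are arbitrary here.

	// Walk candidate private /24 subnets and return the first one that no
	// local interface already sits in. Simplified illustration.
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func subnetInUse(cidr string) (bool, error) {
		_, candidate, err := net.ParseCIDR(cidr)
		if err != nil {
			return false, err
		}
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false, err
		}
		for _, a := range addrs {
			ipNet, ok := a.(*net.IPNet)
			if !ok {
				continue
			}
			// Collision if either network contains the other's base address.
			if candidate.Contains(ipNet.IP) || ipNet.Contains(candidate.IP) {
				return true, nil
			}
		}
		return false, nil
	}
	
	func freePrivateSubnet() (string, error) {
		// Start at 192.168.39.0/24 as in the log; the step is arbitrary here.
		for third := 39; third <= 254; third += 11 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			used, err := subnetInUse(cidr)
			if err != nil {
				return "", err
			}
			if !used {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free private /24 found")
	}
	
	func main() {
		cidr, err := freePrivateSubnet()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("using free private subnet", cidr)
	}
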
	I0526 21:22:39.133270  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.133210  527509 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:22:39.133284  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955 ...
	I0526 21:22:39.133324  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso
	I0526 21:22:39.133346  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso...
	I0526 21:22:39.318215  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.318063  527509 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa...
	I0526 21:22:39.382875  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.382772  527509 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/multinode-20210526212238-510955.rawdisk...
	I0526 21:22:39.382907  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Writing magic tar header
	I0526 21:22:39.382921  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Writing SSH key tar header
	I0526 21:22:39.382932  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.382878  527509 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955 ...
	I0526 21:22:39.383065  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955
	I0526 21:22:39.383099  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955 (perms=drwx------)
	I0526 21:22:39.383117  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines
	I0526 21:22:39.383143  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:22:39.383159  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1
	I0526 21:22:39.383176  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0526 21:22:39.383193  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins
	I0526 21:22:39.383212  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines (perms=drwxr-xr-x)
	I0526 21:22:39.383234  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube (perms=drwxr-xr-x)
	I0526 21:22:39.383250  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1 (perms=drwxr-xr-x)
	I0526 21:22:39.383264  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0526 21:22:39.383275  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0526 21:22:39.383285  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home
	I0526 21:22:39.383298  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Creating domain...
	I0526 21:22:39.383311  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Skipping /home - not owner
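
The .rawdisk created above is essentially a sparse file of the requested size (20000MB) with the SSH key written into a small tar header at the front so the guest can import it on first boot. A sketch of just the sparse-allocation part, with illustrative paths; the tar-header step is omitted.

	// Allocate a sparse raw disk image of the requested size in MB.
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func createRawDisk(path string, sizeMB int64) error {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			return err
		}
		defer f.Close()
		// Truncate grows the file without writing data blocks, so the
		// 20000MB disk starts out using almost no space on the host.
		return f.Truncate(sizeMB * 1024 * 1024)
	}
	
	func main() {
		if err := createRawDisk("/tmp/demo.rawdisk", 20000); err != nil {
			fmt.Println("create disk:", err)
			return
		}
		fmt.Println("sparse raw disk created")
	}
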
	I0526 21:22:39.409550  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:d9:59:0d in network default
	I0526 21:22:39.410061  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Ensuring networks are active...
	I0526 21:22:39.410089  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:39.411924  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Ensuring network default is active
	I0526 21:22:39.412209  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Ensuring network mk-multinode-20210526212238-510955 is active
	I0526 21:22:39.412686  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Getting domain xml...
	I0526 21:22:39.414362  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Creating domain...
	I0526 21:22:39.766721  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Waiting to get IP...
	I0526 21:22:39.767397  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:39.767893  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:39.767927  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.767865  527509 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0526 21:22:40.032058  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.032436  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.032462  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:40.032386  527509 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0526 21:22:40.414793  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.415240  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.415278  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:40.415198  527509 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0526 21:22:40.839646  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.840058  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.840094  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:40.840006  527509 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0526 21:22:41.314603  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.315042  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.315075  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:41.314986  527509 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0526 21:22:41.903598  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.903947  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.903973  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:41.903891  527509 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0526 21:22:42.739982  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:42.740267  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:42.740303  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:42.740242  527509 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0526 21:22:43.488080  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:43.488537  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:43.488564  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:43.488510  527509 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0526 21:22:44.477090  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:44.477458  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:44.477489  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:44.477406  527509 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0526 21:22:45.668795  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:45.669147  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:45.669182  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:45.669085  527509 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0526 21:22:47.348770  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:47.349176  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:47.349207  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:47.349116  527509 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0526 21:22:49.696210  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:49.696624  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:49.696659  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:49.696561  527509 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0526 21:22:53.067037  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:53.067462  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:53.067498  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:53.067402  527509 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
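
The repeated "will retry after …" lines are a backoff loop around the DHCP-lease lookup: probe for the domain's IP, and if it is not there yet, sleep a growing interval and try again until a deadline. A generic sketch of that pattern; the probe below is a stand-in for the real lease query.

	// Retry a probe with jittered, growing backoff until it succeeds or an
	// overall deadline passes. Stand-in for waiting on a DHCP lease.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	var errNoIP = errors.New("unable to find current IP address")
	
	func waitFor(probe func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 250 * time.Millisecond
		for {
			ip, err := probe()
			if err == nil {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("timed out: %w", err)
			}
			// Grow the wait and add jitter, roughly like the 263ms, 381ms,
			// 422ms, ... progression in the log.
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			wait = wait * 3 / 2
		}
	}
	
	func main() {
		attempts := 0
		ip, err := waitFor(func() (string, error) {
			attempts++
			if attempts < 4 { // pretend the lease shows up on the 4th poll
				return "", errNoIP
			}
			return "192.168.39.229", nil
		}, 2*time.Minute)
		fmt.Println(ip, err)
	}
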
	I0526 21:22:56.188960  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.189444  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Found IP for machine: 192.168.39.229
	I0526 21:22:56.189475  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has current primary IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.189486  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Reserving static IP address...
	I0526 21:22:56.189744  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find host DHCP lease matching {name: "multinode-20210526212238-510955", mac: "52:54:00:0c:8b:34", ip: "192.168.39.229"} in network mk-multinode-20210526212238-510955
	I0526 21:22:56.237513  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Reserved static IP address: 192.168.39.229
	I0526 21:22:56.237543  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Waiting for SSH to be available...
	I0526 21:22:56.237554  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Getting to WaitForSSH function...
	I0526 21:22:56.242739  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.243048  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.243083  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.243167  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Using SSH client type: external
	I0526 21:22:56.243197  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa (-rw-------)
	I0526 21:22:56.243239  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:22:56.243273  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | About to run SSH command:
	I0526 21:22:56.243284  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | exit 0
	I0526 21:22:56.376672  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | SSH cmd err, output: <nil>: 
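
WaitForSSH shells out to the external ssh client with the options logged above and simply runs exit 0 until the command succeeds. A trimmed-down sketch of that probe; the key path is a placeholder, and the options are copied from the log.

	// Probe SSH reachability by running `exit 0` through the external ssh
	// client, the way the log above does. Illustrative wrapper only.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func sshReady(user, ip, keyPath string) bool {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run() == nil
	}
	
	func main() {
		key := "/path/to/machines/<name>/id_rsa" // placeholder path
		for i := 0; i < 30; i++ {
			if sshReady("docker", "192.168.39.229", key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}
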
	I0526 21:22:56.377111  527485 main.go:128] libmachine: (multinode-20210526212238-510955) KVM machine creation complete!
	I0526 21:22:56.377180  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetConfigRaw
	I0526 21:22:56.377707  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:56.377887  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:56.378034  527485 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0526 21:22:56.378052  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:22:56.380329  527485 main.go:128] libmachine: Detecting operating system of created instance...
	I0526 21:22:56.380348  527485 main.go:128] libmachine: Waiting for SSH to be available...
	I0526 21:22:56.380357  527485 main.go:128] libmachine: Getting to WaitForSSH function...
	I0526 21:22:56.380367  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.384748  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.385102  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.385137  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.385197  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.385400  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.385559  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.385683  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.385794  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.385998  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.386014  527485 main.go:128] libmachine: About to run SSH command:
	exit 0
	I0526 21:22:56.503895  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:22:56.503914  527485 main.go:128] libmachine: Detecting the provisioner...
	I0526 21:22:56.503922  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.508753  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.509063  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.509091  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.509237  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.509419  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.509570  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.509670  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.509797  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.509953  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.509972  527485 main.go:128] libmachine: About to run SSH command:
	cat /etc/os-release
	I0526 21:22:56.626000  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0526 21:22:56.626062  527485 main.go:128] libmachine: found compatible host: buildroot
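
Provisioner detection is `cat /etc/os-release` over the same SSH session plus a match on the ID/NAME fields, which is how Buildroot is recognised here. A small sketch of the parsing side:

	// Parse key=value pairs from /etc/os-release output and decide which
	// provisioner matches. Values may be quoted. Illustrative only.
	package main
	
	import (
		"bufio"
		"fmt"
		"strings"
	)
	
	func parseOSRelease(out string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			parts := strings.SplitN(line, "=", 2)
			if len(parts) == 2 {
				fields[parts[0]] = strings.Trim(parts[1], `"`)
			}
		}
		return fields
	}
	
	func main() {
		out := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
		f := parseOSRelease(out)
		if f["ID"] == "buildroot" {
			fmt.Println("found compatible host:", f["NAME"], f["VERSION"])
		}
	}
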
	I0526 21:22:56.626078  527485 main.go:128] libmachine: Provisioning with buildroot...
	I0526 21:22:56.626088  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:56.626246  527485 buildroot.go:166] provisioning hostname "multinode-20210526212238-510955"
	I0526 21:22:56.626274  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:56.626456  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.630680  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.630962  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.630991  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.631098  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.631280  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.631439  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.631564  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.631708  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.631859  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.631875  527485 main.go:128] libmachine: About to run SSH command:
	sudo hostname multinode-20210526212238-510955 && echo "multinode-20210526212238-510955" | sudo tee /etc/hostname
	I0526 21:22:56.756884  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: multinode-20210526212238-510955
	
	I0526 21:22:56.756910  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.761538  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.761862  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.761885  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.762049  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.762210  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.762353  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.762480  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.762641  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.762804  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.762834  527485 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210526212238-510955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210526212238-510955/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210526212238-510955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:22:56.883834  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 

	I0526 21:22:56.883870  527485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:22:56.883898  527485 buildroot.go:174] setting up certificates
	I0526 21:22:56.883908  527485 provision.go:83] configureAuth start
	I0526 21:22:56.883920  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:56.884105  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:22:56.888712  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.889022  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.889065  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.889177  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.893241  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.893569  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.893605  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.893664  527485 provision.go:137] copyHostCerts
	I0526 21:22:56.893690  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:22:56.893734  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:22:56.893749  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:22:56.893806  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:22:56.893908  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:22:56.893940  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:22:56.893948  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:22:56.893980  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:22:56.894036  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:22:56.894063  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:22:56.894074  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:22:56.894104  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:22:56.894160  527485 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.multinode-20210526212238-510955 san=[192.168.39.229 192.168.39.229 localhost 127.0.0.1 minikube multinode-20210526212238-510955]
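
Generating the server certificate means signing a fresh key with the cluster CA and putting every address the node might be reached at into the SAN list (the IPs and hostnames shown in san=[…] above). A hedged sketch with crypto/x509; the throwaway CA in main stands in for the ca.pem/ca-key.pem files the real flow loads from disk.

	// Generate a server certificate signed by a CA, with the SAN list from
	// the log (IPs plus hostnames). Sketch only; key persistence is omitted.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}
	
	func main() {
		// Throwaway self-signed CA so the sketch runs standalone; the real
		// flow loads ca.pem/ca-key.pem from the .minikube/certs directory.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		sans := []string{"192.168.39.229", "localhost", "127.0.0.1", "minikube", "multinode-20210526212238-510955"}
		certPEM, _, err := serverCert(caCert, caKey, "jenkins.multinode-20210526212238-510955", sans)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("server cert: %d bytes of PEM\n", len(certPEM))
	}
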
	I0526 21:22:57.293529  527485 provision.go:171] copyRemoteCerts
	I0526 21:22:57.293605  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:22:57.293638  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.298962  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.299286  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.299319  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.299485  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.299697  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.299864  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.299966  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.383753  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0526 21:22:57.383820  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:22:57.400655  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0526 21:22:57.400704  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0526 21:22:57.417361  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0526 21:22:57.417400  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0526 21:22:57.433924  527485 provision.go:86] duration metric: configureAuth took 550.003144ms
	I0526 21:22:57.433943  527485 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:22:57.434087  527485 main.go:128] libmachine: Checking connection to Docker...
	I0526 21:22:57.434102  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetURL
	I0526 21:22:57.436663  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Using libvirt version 3000000
	I0526 21:22:57.441125  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.441437  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.441474  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.441576  527485 main.go:128] libmachine: Docker is up and running!
	I0526 21:22:57.441595  527485 main.go:128] libmachine: Reticulating splines...
	I0526 21:22:57.441603  527485 client.go:171] LocalClient.Create took 18.584500055s
	I0526 21:22:57.441621  527485 start.go:168] duration metric: libmachine.API.Create for "multinode-20210526212238-510955" took 18.584554789s
	I0526 21:22:57.441647  527485 start.go:267] post-start starting for "multinode-20210526212238-510955" (driver="kvm2")
	I0526 21:22:57.441652  527485 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:22:57.441664  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.441876  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:22:57.441900  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.445895  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.446135  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.446157  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.446277  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.446442  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.446598  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.446750  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.531400  527485 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:22:57.535466  527485 command_runner.go:124] > NAME=Buildroot
	I0526 21:22:57.535481  527485 command_runner.go:124] > VERSION=2020.02.12
	I0526 21:22:57.535485  527485 command_runner.go:124] > ID=buildroot
	I0526 21:22:57.535490  527485 command_runner.go:124] > VERSION_ID=2020.02.12
	I0526 21:22:57.535495  527485 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0526 21:22:57.535526  527485 info.go:137] Remote host: Buildroot 2020.02.12
	I0526 21:22:57.535553  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:22:57.535596  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
	I0526 21:22:57.535738  527485 start.go:270] post-start completed in 94.085921ms
	I0526 21:22:57.535784  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetConfigRaw
	I0526 21:22:57.536236  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:22:57.540471  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.540741  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.540771  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.541002  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:22:57.541144  527485 start.go:129] duration metric: createHost completed in 18.697885597s
	I0526 21:22:57.541156  527485 start.go:80] releasing machines lock for "multinode-20210526212238-510955", held for 18.697995329s
	I0526 21:22:57.541186  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.541336  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:22:57.545449  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.545746  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.545767  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.545902  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.546065  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.546504  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.546703  527485 ssh_runner.go:149] Run: systemctl --version
	I0526 21:22:57.546724  527485 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:22:57.546730  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.546752  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.553878  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.553987  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.554209  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.554238  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.554268  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.554288  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.554345  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.554500  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.554502  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.554655  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.554656  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.554817  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.554839  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.554926  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.638446  527485 command_runner.go:124] > systemd 244 (244)
	I0526 21:22:57.638487  527485 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0526 21:22:57.638513  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:22:57.638549  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:22:57.638599  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:22:57.663848  527485 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0526 21:22:57.663864  527485 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0526 21:22:57.663870  527485 command_runner.go:124] > <H1>302 Moved</H1>
	I0526 21:22:57.663880  527485 command_runner.go:124] > The document has moved
	I0526 21:22:57.663889  527485 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0526 21:22:57.663901  527485 command_runner.go:124] > </BODY></HTML>
	I0526 21:23:01.640010  527485 command_runner.go:124] > {
	I0526 21:23:01.640032  527485 command_runner.go:124] >   "images": [
	I0526 21:23:01.640039  527485 command_runner.go:124] >   ]
	I0526 21:23:01.640043  527485 command_runner.go:124] > }
	I0526 21:23:01.640717  527485 command_runner.go:124] ! time="2021-05-26T21:22:57Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0526 21:23:01.640757  527485 command_runner.go:124] ! time="2021-05-26T21:22:59Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:23:01.640773  527485 command_runner.go:124] ! time="2021-05-26T21:23:01Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:23:01.640790  527485 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.002175504s)
	I0526 21:23:01.640891  527485 containerd.go:566] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.2". assuming images are not preloaded.
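The four-second delay and the endpoint warnings above come from crictl probing its deprecated default endpoint list (dockershim first, then containerd). Pointing crictl at containerd directly avoids that probe; the /etc/crictl.yaml written a few steps below achieves the same thing. A minimal manual equivalent using crictl's standard endpoint flags (a sketch only, not minikube's code path):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
                --image-endpoint unix:///run/containerd/containerd.sock \
                images --output json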
	I0526 21:23:01.640946  527485 ssh_runner.go:149] Run: which lz4
	I0526 21:23:01.644596  527485 command_runner.go:124] > /bin/lz4
	I0526 21:23:01.644955  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0526 21:23:01.645027  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0526 21:23:01.649182  527485 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:23:01.649225  527485 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:23:01.649244  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (953722271 bytes)
	I0526 21:23:05.598453  527485 containerd.go:503] Took 3.953446 seconds to copy over the tarball
	I0526 21:23:05.598520  527485 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0526 21:23:12.109109  527485 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.510561228s)
	I0526 21:23:12.109141  527485 containerd.go:510] Took 6.510657 seconds to extract the tarball
	I0526 21:23:12.109179  527485 ssh_runner.go:100] rm: /preloaded.tar.lz4
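For readability, the preload step logged above condenses to the following shell sketch. The ssh/scp calls stand in for minikube's internal ssh_runner and the minikube home path is shortened to $MINIKUBE_HOME; this is an assumed equivalent, not the literal code path:

    KEY=$MINIKUBE_HOME/machines/multinode-20210526212238-510955/id_rsa
    PRELOAD=$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
    # Copy the ~954 MB tarball into the VM, then unpack it under /var and remove it, as logged above.
    scp -i "$KEY" "$PRELOAD" docker@192.168.39.229:/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.229 'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'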
	I0526 21:23:12.170392  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:23:12.334202  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:23:12.374924  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:23:12.384855  527485 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0526 21:23:12.414104  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:23:12.429040  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:23:12.438147  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:23:12.451339  527485 command_runner.go:124] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0526 21:23:12.451364  527485 command_runner.go:124] > image-endpoint: unix:///run/containerd/containerd.sock
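The two lines echoed back above are the entire /etc/crictl.yaml: both the runtime and image endpoints now point at the containerd socket, so later crictl calls no longer probe dockershim. A quick manual check from a shell on the node (sketch):

    sudo cat /etc/crictl.yaml                          # should show the two endpoint lines above
    sudo crictl info >/dev/null && echo "containerd CRI endpoint reachable"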
	I0526 21:23:12.451502  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3Npe
mUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lLm9wdGlvbnNdCiAgICAgICAgICBOb1Bpdm90Um9vdCA9IHRydWUKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvb
GQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0526 21:23:12.468915  527485 command_runner.go:124] > root = "/var/lib/containerd"
	I0526 21:23:12.468936  527485 command_runner.go:124] > state = "/run/containerd"
	I0526 21:23:12.468944  527485 command_runner.go:124] > oom_score = 0
	I0526 21:23:12.468949  527485 command_runner.go:124] > [grpc]
	I0526 21:23:12.468957  527485 command_runner.go:124] >   address = "/run/containerd/containerd.sock"
	I0526 21:23:12.468963  527485 command_runner.go:124] >   uid = 0
	I0526 21:23:12.468969  527485 command_runner.go:124] >   gid = 0
	I0526 21:23:12.468976  527485 command_runner.go:124] >   max_recv_message_size = 16777216
	I0526 21:23:12.468985  527485 command_runner.go:124] >   max_send_message_size = 16777216
	I0526 21:23:12.468990  527485 command_runner.go:124] > [debug]
	I0526 21:23:12.468998  527485 command_runner.go:124] >   address = ""
	I0526 21:23:12.469003  527485 command_runner.go:124] >   uid = 0
	I0526 21:23:12.469010  527485 command_runner.go:124] >   gid = 0
	I0526 21:23:12.469017  527485 command_runner.go:124] >   level = ""
	I0526 21:23:12.469025  527485 command_runner.go:124] > [metrics]
	I0526 21:23:12.469031  527485 command_runner.go:124] >   address = ""
	I0526 21:23:12.469040  527485 command_runner.go:124] >   grpc_histogram = false
	I0526 21:23:12.469046  527485 command_runner.go:124] > [cgroup]
	I0526 21:23:12.469052  527485 command_runner.go:124] >   path = ""
	I0526 21:23:12.469058  527485 command_runner.go:124] > [plugins]
	I0526 21:23:12.469065  527485 command_runner.go:124] >   [plugins.cgroups]
	I0526 21:23:12.469076  527485 command_runner.go:124] >     no_prometheus = false
	I0526 21:23:12.469084  527485 command_runner.go:124] >   [plugins.cri]
	I0526 21:23:12.469090  527485 command_runner.go:124] >     stream_server_address = ""
	I0526 21:23:12.469101  527485 command_runner.go:124] >     stream_server_port = "10010"
	I0526 21:23:12.469108  527485 command_runner.go:124] >     enable_selinux = false
	I0526 21:23:12.469118  527485 command_runner.go:124] >     sandbox_image = "k8s.gcr.io/pause:3.2"
	I0526 21:23:12.469126  527485 command_runner.go:124] >     stats_collect_period = 10
	I0526 21:23:12.469133  527485 command_runner.go:124] >     systemd_cgroup = false
	I0526 21:23:12.469144  527485 command_runner.go:124] >     enable_tls_streaming = false
	I0526 21:23:12.469151  527485 command_runner.go:124] >     max_container_log_line_size = 16384
	I0526 21:23:12.469158  527485 command_runner.go:124] >     [plugins.cri.containerd]
	I0526 21:23:12.469165  527485 command_runner.go:124] >       snapshotter = "overlayfs"
	I0526 21:23:12.469174  527485 command_runner.go:124] >       [plugins.cri.containerd.default_runtime]
	I0526 21:23:12.469181  527485 command_runner.go:124] >         runtime_type = "io.containerd.runc.v2"
	I0526 21:23:12.469193  527485 command_runner.go:124] >         [plugins.cri.containerd.default_runtime.options]
	I0526 21:23:12.469201  527485 command_runner.go:124] >           NoPivotRoot = true
	I0526 21:23:12.469209  527485 command_runner.go:124] >       [plugins.cri.containerd.untrusted_workload_runtime]
	I0526 21:23:12.469219  527485 command_runner.go:124] >         runtime_type = ""
	I0526 21:23:12.469225  527485 command_runner.go:124] >         runtime_engine = ""
	I0526 21:23:12.469238  527485 command_runner.go:124] >         runtime_root = ""
	I0526 21:23:12.469244  527485 command_runner.go:124] >     [plugins.cri.cni]
	I0526 21:23:12.469252  527485 command_runner.go:124] >       bin_dir = "/opt/cni/bin"
	I0526 21:23:12.469259  527485 command_runner.go:124] >       conf_dir = "/etc/cni/net.mk"
	I0526 21:23:12.469268  527485 command_runner.go:124] >       conf_template = ""
	I0526 21:23:12.469275  527485 command_runner.go:124] >     [plugins.cri.registry]
	I0526 21:23:12.469293  527485 command_runner.go:124] >       [plugins.cri.registry.mirrors]
	I0526 21:23:12.469305  527485 command_runner.go:124] >         [plugins.cri.registry.mirrors."docker.io"]
	I0526 21:23:12.469314  527485 command_runner.go:124] >           endpoint = ["https://registry-1.docker.io"]
	I0526 21:23:12.469322  527485 command_runner.go:124] >         [plugins.diff-service]
	I0526 21:23:12.469329  527485 command_runner.go:124] >     default = ["walking"]
	I0526 21:23:12.469336  527485 command_runner.go:124] >   [plugins.scheduler]
	I0526 21:23:12.469343  527485 command_runner.go:124] >     pause_threshold = 0.02
	I0526 21:23:12.469350  527485 command_runner.go:124] >     deletion_threshold = 0
	I0526 21:23:12.469356  527485 command_runner.go:124] >     mutation_threshold = 100
	I0526 21:23:12.469365  527485 command_runner.go:124] >     schedule_delay = "0s"
	I0526 21:23:12.469372  527485 command_runner.go:124] >     startup_delay = "100ms"
	I0526 21:23:12.469420  527485 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:23:12.478525  527485 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:23:12.478613  527485 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:23:12.478664  527485 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0526 21:23:12.492700  527485 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0526 21:23:12.499064  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:23:12.612771  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:23:16.654015  527485 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (4.041197168s)
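Taken together, the containerd configuration rollout logged above reduces to the following sequence (a simplified sketch; CONFIG_B64 stands for the base64 blob printed earlier and is elided here):

    printf %s "$CONFIG_B64" | base64 -d | sudo tee /etc/containerd/config.toml >/dev/null
    sudo modprobe br_netfilter                         # bridge-nf-call-iptables was missing before this
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload
    sudo systemctl restart containerd                  # took about 4s in this run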
	I0526 21:23:16.654058  527485 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:23:16.654117  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:23:16.662580  527485 command_runner.go:124] ! stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:23:16.662621  527485 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:23:17.767943  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:23:17.773490  527485 command_runner.go:124] >   File: /run/containerd/containerd.sock
	I0526 21:23:17.773516  527485 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0526 21:23:17.773524  527485 command_runner.go:124] > Device: 14h/20d	Inode: 29618       Links: 1
	I0526 21:23:17.773532  527485 command_runner.go:124] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:23:17.773538  527485 command_runner.go:124] > Access: 2021-05-26 21:23:16.714470320 +0000
	I0526 21:23:17.773543  527485 command_runner.go:124] > Modify: 2021-05-26 21:23:16.714470320 +0000
	I0526 21:23:17.773549  527485 command_runner.go:124] > Change: 2021-05-26 21:23:16.714470320 +0000
	I0526 21:23:17.773553  527485 command_runner.go:124] >  Birth: -
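The two stat calls above are the 60-second wait for the containerd socket: the first attempt races the restart and fails, and the retry about 1.1s later finds the socket. A stand-alone version of that wait looks roughly like this (a sketch, not minikube's actual retry code):

    # Wait up to 60s for the containerd socket to appear.
    for _ in $(seq 1 60); do
        [ -S /run/containerd/containerd.sock ] && break
        sleep 1
    done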
	I0526 21:23:17.773923  527485 start.go:401] Will wait 60s for crictl version
	I0526 21:23:17.773983  527485 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:23:17.795074  527485 command_runner.go:124] > Version:  0.1.0
	I0526 21:23:17.795168  527485 command_runner.go:124] > RuntimeName:  containerd
	I0526 21:23:17.795509  527485 command_runner.go:124] > RuntimeVersion:  v1.4.4
	I0526 21:23:17.795668  527485 command_runner.go:124] > RuntimeApiVersion:  v1alpha2
	I0526 21:23:17.797077  527485 start.go:410] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.4
	RuntimeApiVersion:  v1alpha2
	I0526 21:23:17.797129  527485 ssh_runner.go:149] Run: containerd --version
	I0526 21:23:17.825318  527485 command_runner.go:124] > containerd github.com/containerd/containerd v1.4.4 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
	I0526 21:23:17.827109  527485 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	I0526 21:23:17.827153  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:23:17.832341  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:17.832680  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:17.832703  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:17.832883  527485 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0526 21:23:17.837078  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:23:17.847842  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:23:17.847865  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:23:17.847902  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:23:17.868882  527485 command_runner.go:124] > {
	I0526 21:23:17.868898  527485 command_runner.go:124] >   "images": [
	I0526 21:23:17.868904  527485 command_runner.go:124] >     {
	I0526 21:23:17.868922  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:23:17.868928  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.868938  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:23:17.868943  527485 command_runner.go:124] >       ],
	I0526 21:23:17.868950  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.868963  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:23:17.868977  527485 command_runner.go:124] >       ],
	I0526 21:23:17.868984  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:23:17.868990  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.868996  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869006  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869015  527485 command_runner.go:124] >     },
	I0526 21:23:17.869020  527485 command_runner.go:124] >     {
	I0526 21:23:17.869036  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:23:17.869046  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869054  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:23:17.869063  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869069  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869083  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:23:17.869094  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869100  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:23:17.869107  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869113  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:23:17.869121  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869125  527485 command_runner.go:124] >     },
	I0526 21:23:17.869133  527485 command_runner.go:124] >     {
	I0526 21:23:17.869143  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:23:17.869152  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869161  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:23:17.869169  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869176  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869188  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:23:17.869195  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869201  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:23:17.869211  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869222  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869228  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869234  527485 command_runner.go:124] >     },
	I0526 21:23:17.869239  527485 command_runner.go:124] >     {
	I0526 21:23:17.869251  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:23:17.869259  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869268  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:23:17.869277  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869284  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869299  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:23:17.869317  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869324  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:23:17.869331  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869338  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869351  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869357  527485 command_runner.go:124] >     },
	I0526 21:23:17.869363  527485 command_runner.go:124] >     {
	I0526 21:23:17.869376  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:23:17.869384  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869391  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:23:17.869399  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869405  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869418  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:23:17.869424  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869430  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:23:17.869437  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869443  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869451  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869456  527485 command_runner.go:124] >     },
	I0526 21:23:17.869464  527485 command_runner.go:124] >     {
	I0526 21:23:17.869474  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:23:17.869483  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869491  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:23:17.869497  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869502  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869514  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:23:17.869521  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869527  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:23:17.869534  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869540  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869546  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869553  527485 command_runner.go:124] >     },
	I0526 21:23:17.869558  527485 command_runner.go:124] >     {
	I0526 21:23:17.869569  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:23:17.869581  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869589  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:23:17.869596  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869603  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869616  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:23:17.869625  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869631  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:23:17.869637  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.869645  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.869653  527485 command_runner.go:124] >       },
	I0526 21:23:17.869660  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869667  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869673  527485 command_runner.go:124] >     },
	I0526 21:23:17.869679  527485 command_runner.go:124] >     {
	I0526 21:23:17.869690  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:23:17.869697  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869707  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:23:17.869712  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869721  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869735  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:23:17.869744  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869751  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:23:17.869759  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.869766  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.869772  527485 command_runner.go:124] >       },
	I0526 21:23:17.869778  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869786  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869792  527485 command_runner.go:124] >     },
	I0526 21:23:17.869798  527485 command_runner.go:124] >     {
	I0526 21:23:17.869808  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:23:17.869817  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869824  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:23:17.869832  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869838  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869851  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:23:17.869859  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869866  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:23:17.869873  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869879  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869885  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869892  527485 command_runner.go:124] >     },
	I0526 21:23:17.869896  527485 command_runner.go:124] >     {
	I0526 21:23:17.869907  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:23:17.869916  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869924  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:23:17.869930  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869937  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869949  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:23:17.869958  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869964  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:23:17.869971  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.869977  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.869983  527485 command_runner.go:124] >       },
	I0526 21:23:17.870024  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.870035  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.870040  527485 command_runner.go:124] >     },
	I0526 21:23:17.870045  527485 command_runner.go:124] >     {
	I0526 21:23:17.870059  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:23:17.870068  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.870075  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:23:17.870081  527485 command_runner.go:124] >       ],
	I0526 21:23:17.870088  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.870100  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:23:17.870109  527485 command_runner.go:124] >       ],
	I0526 21:23:17.870116  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:23:17.870123  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.870129  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.870136  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.870141  527485 command_runner.go:124] >     }
	I0526 21:23:17.870146  527485 command_runner.go:124] >   ]
	I0526 21:23:17.870153  527485 command_runner.go:124] > }
	I0526 21:23:17.870585  527485 containerd.go:570] all images are preloaded for containerd runtime.
	I0526 21:23:17.870600  527485 containerd.go:474] Images already preloaded, skipping extraction
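The preload check above appears to look for the expected image tags in the crictl output; the same check can be reproduced by hand (sketch; jq is assumed to be available wherever this is run, it is not part of the minikube ISO):

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' \
      | grep -x 'k8s.gcr.io/kube-apiserver:v1.20.2'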
	I0526 21:23:17.870632  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:23:17.889731  527485 command_runner.go:124] > {
	I0526 21:23:17.889744  527485 command_runner.go:124] >   "images": [
	I0526 21:23:17.889748  527485 command_runner.go:124] >     {
	I0526 21:23:17.889757  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:23:17.889762  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.889775  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:23:17.889786  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889797  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.889808  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:23:17.889813  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889818  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:23:17.889822  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.889826  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.889830  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.889833  527485 command_runner.go:124] >     },
	I0526 21:23:17.889837  527485 command_runner.go:124] >     {
	I0526 21:23:17.889846  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:23:17.889861  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.889869  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:23:17.889876  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889884  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.889897  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:23:17.889902  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889906  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:23:17.889910  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.889915  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:23:17.889919  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.889922  527485 command_runner.go:124] >     },
	I0526 21:23:17.889925  527485 command_runner.go:124] >     {
	I0526 21:23:17.889932  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:23:17.889937  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.889945  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:23:17.889952  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889964  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.889980  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:23:17.889987  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889994  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:23:17.890001  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890006  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890010  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890013  527485 command_runner.go:124] >     },
	I0526 21:23:17.890017  527485 command_runner.go:124] >     {
	I0526 21:23:17.890023  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:23:17.890030  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890038  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:23:17.890047  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890055  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890069  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:23:17.890075  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890082  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:23:17.890090  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890096  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890102  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890105  527485 command_runner.go:124] >     },
	I0526 21:23:17.890109  527485 command_runner.go:124] >     {
	I0526 21:23:17.890119  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:23:17.890128  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890136  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:23:17.890142  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890148  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890161  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:23:17.890169  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890175  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:23:17.890182  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890187  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890191  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890195  527485 command_runner.go:124] >     },
	I0526 21:23:17.890205  527485 command_runner.go:124] >     {
	I0526 21:23:17.890219  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:23:17.890226  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890233  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:23:17.890239  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890247  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890258  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:23:17.890266  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890272  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:23:17.890276  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890280  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890285  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890293  527485 command_runner.go:124] >     },
	I0526 21:23:17.890298  527485 command_runner.go:124] >     {
	I0526 21:23:17.890308  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:23:17.890318  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890326  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:23:17.890331  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890340  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890353  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:23:17.890358  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890362  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:23:17.890366  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.890372  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.890379  527485 command_runner.go:124] >       },
	I0526 21:23:17.890386  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890394  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890399  527485 command_runner.go:124] >     },
	I0526 21:23:17.890404  527485 command_runner.go:124] >     {
	I0526 21:23:17.890415  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:23:17.890425  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890433  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:23:17.890441  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890445  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890455  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:23:17.890463  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890470  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:23:17.890477  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.890483  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.890490  527485 command_runner.go:124] >       },
	I0526 21:23:17.890496  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890504  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890509  527485 command_runner.go:124] >     },
	I0526 21:23:17.890514  527485 command_runner.go:124] >     {
	I0526 21:23:17.890524  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:23:17.890533  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890540  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:23:17.890551  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890557  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890571  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:23:17.890576  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890585  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:23:17.890591  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890599  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890605  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890611  527485 command_runner.go:124] >     },
	I0526 21:23:17.890615  527485 command_runner.go:124] >     {
	I0526 21:23:17.890624  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:23:17.890633  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890641  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:23:17.890648  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890654  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890668  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:23:17.890675  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890685  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:23:17.890694  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.890701  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.890705  527485 command_runner.go:124] >       },
	I0526 21:23:17.890735  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890745  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890750  527485 command_runner.go:124] >     },
	I0526 21:23:17.890756  527485 command_runner.go:124] >     {
	I0526 21:23:17.890770  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:23:17.890775  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890787  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:23:17.890795  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890802  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890816  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:23:17.890823  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890829  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:23:17.890837  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890843  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890850  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890856  527485 command_runner.go:124] >     }
	I0526 21:23:17.890863  527485 command_runner.go:124] >   ]
	I0526 21:23:17.890868  527485 command_runner.go:124] > }
	I0526 21:23:17.891012  527485 containerd.go:570] all images are preloaded for containerd runtime.
	I0526 21:23:17.891026  527485 cache_images.go:74] Images are preloaded, skipping loading
	I0526 21:23:17.891071  527485 ssh_runner.go:149] Run: sudo crictl info
	I0526 21:23:17.909686  527485 command_runner.go:124] > {
	I0526 21:23:17.909698  527485 command_runner.go:124] >   "status": {
	I0526 21:23:17.909702  527485 command_runner.go:124] >     "conditions": [
	I0526 21:23:17.909706  527485 command_runner.go:124] >       {
	I0526 21:23:17.909711  527485 command_runner.go:124] >         "type": "RuntimeReady",
	I0526 21:23:17.909715  527485 command_runner.go:124] >         "status": true,
	I0526 21:23:17.909719  527485 command_runner.go:124] >         "reason": "",
	I0526 21:23:17.909724  527485 command_runner.go:124] >         "message": ""
	I0526 21:23:17.909727  527485 command_runner.go:124] >       },
	I0526 21:23:17.909734  527485 command_runner.go:124] >       {
	I0526 21:23:17.909739  527485 command_runner.go:124] >         "type": "NetworkReady",
	I0526 21:23:17.909743  527485 command_runner.go:124] >         "status": false,
	I0526 21:23:17.909749  527485 command_runner.go:124] >         "reason": "NetworkPluginNotReady",
	I0526 21:23:17.909756  527485 command_runner.go:124] >         "message": "Network plugin returns error: cni plugin not initialized"
	I0526 21:23:17.909776  527485 command_runner.go:124] >       }
	I0526 21:23:17.909789  527485 command_runner.go:124] >     ]
	I0526 21:23:17.909796  527485 command_runner.go:124] >   },
	I0526 21:23:17.909799  527485 command_runner.go:124] >   "cniconfig": {
	I0526 21:23:17.909803  527485 command_runner.go:124] >     "PluginDirs": [
	I0526 21:23:17.909808  527485 command_runner.go:124] >       "/opt/cni/bin"
	I0526 21:23:17.909812  527485 command_runner.go:124] >     ],
	I0526 21:23:17.909817  527485 command_runner.go:124] >     "PluginConfDir": "/etc/cni/net.mk",
	I0526 21:23:17.909821  527485 command_runner.go:124] >     "PluginMaxConfNum": 1,
	I0526 21:23:17.909825  527485 command_runner.go:124] >     "Prefix": "eth",
	I0526 21:23:17.909829  527485 command_runner.go:124] >     "Networks": [
	I0526 21:23:17.909833  527485 command_runner.go:124] >       {
	I0526 21:23:17.909837  527485 command_runner.go:124] >         "Config": {
	I0526 21:23:17.909841  527485 command_runner.go:124] >           "Name": "cni-loopback",
	I0526 21:23:17.909846  527485 command_runner.go:124] >           "CNIVersion": "0.3.1",
	I0526 21:23:17.909850  527485 command_runner.go:124] >           "Plugins": [
	I0526 21:23:17.909854  527485 command_runner.go:124] >             {
	I0526 21:23:17.909858  527485 command_runner.go:124] >               "Network": {
	I0526 21:23:17.909863  527485 command_runner.go:124] >                 "type": "loopback",
	I0526 21:23:17.909869  527485 command_runner.go:124] >                 "ipam": {},
	I0526 21:23:17.909873  527485 command_runner.go:124] >                 "dns": {}
	I0526 21:23:17.909879  527485 command_runner.go:124] >               },
	I0526 21:23:17.909885  527485 command_runner.go:124] >               "Source": "{\"type\":\"loopback\"}"
	I0526 21:23:17.909890  527485 command_runner.go:124] >             }
	I0526 21:23:17.909893  527485 command_runner.go:124] >           ],
	I0526 21:23:17.909902  527485 command_runner.go:124] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0526 21:23:17.909908  527485 command_runner.go:124] >         },
	I0526 21:23:17.909912  527485 command_runner.go:124] >         "IFName": "lo"
	I0526 21:23:17.909917  527485 command_runner.go:124] >       }
	I0526 21:23:17.909921  527485 command_runner.go:124] >     ]
	I0526 21:23:17.909924  527485 command_runner.go:124] >   },
	I0526 21:23:17.909928  527485 command_runner.go:124] >   "config": {
	I0526 21:23:17.909935  527485 command_runner.go:124] >     "containerd": {
	I0526 21:23:17.909941  527485 command_runner.go:124] >       "snapshotter": "overlayfs",
	I0526 21:23:17.909949  527485 command_runner.go:124] >       "defaultRuntimeName": "default",
	I0526 21:23:17.909955  527485 command_runner.go:124] >       "defaultRuntime": {
	I0526 21:23:17.909960  527485 command_runner.go:124] >         "runtimeType": "io.containerd.runc.v2",
	I0526 21:23:17.909965  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:23:17.909970  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:23:17.909976  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:23:17.909985  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:23:17.909992  527485 command_runner.go:124] >         "options": {},
	I0526 21:23:17.909998  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:23:17.910002  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:23:17.910005  527485 command_runner.go:124] >       },
	I0526 21:23:17.910010  527485 command_runner.go:124] >       "untrustedWorkloadRuntime": {
	I0526 21:23:17.910015  527485 command_runner.go:124] >         "runtimeType": "",
	I0526 21:23:17.910020  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:23:17.910026  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:23:17.910031  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:23:17.910037  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:23:17.910041  527485 command_runner.go:124] >         "options": null,
	I0526 21:23:17.910046  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:23:17.910053  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:23:17.910056  527485 command_runner.go:124] >       },
	I0526 21:23:17.910060  527485 command_runner.go:124] >       "runtimes": {
	I0526 21:23:17.910064  527485 command_runner.go:124] >         "default": {
	I0526 21:23:17.910070  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:23:17.910075  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:23:17.910080  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:23:17.910087  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:23:17.910092  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:23:17.910098  527485 command_runner.go:124] >           "options": {},
	I0526 21:23:17.910108  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:23:17.910115  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:23:17.910118  527485 command_runner.go:124] >         },
	I0526 21:23:17.910122  527485 command_runner.go:124] >         "runc": {
	I0526 21:23:17.910127  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:23:17.910133  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:23:17.910137  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:23:17.910146  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:23:17.910151  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:23:17.910155  527485 command_runner.go:124] >           "options": {},
	I0526 21:23:17.910162  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:23:17.910169  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:23:17.910172  527485 command_runner.go:124] >         }
	I0526 21:23:17.910176  527485 command_runner.go:124] >       },
	I0526 21:23:17.910180  527485 command_runner.go:124] >       "noPivot": false,
	I0526 21:23:17.910185  527485 command_runner.go:124] >       "disableSnapshotAnnotations": true,
	I0526 21:23:17.910189  527485 command_runner.go:124] >       "discardUnpackedLayers": false
	I0526 21:23:17.910193  527485 command_runner.go:124] >     },
	I0526 21:23:17.910196  527485 command_runner.go:124] >     "cni": {
	I0526 21:23:17.910200  527485 command_runner.go:124] >       "binDir": "/opt/cni/bin",
	I0526 21:23:17.910205  527485 command_runner.go:124] >       "confDir": "/etc/cni/net.mk",
	I0526 21:23:17.910209  527485 command_runner.go:124] >       "maxConfNum": 1,
	I0526 21:23:17.910213  527485 command_runner.go:124] >       "confTemplate": ""
	I0526 21:23:17.910216  527485 command_runner.go:124] >     },
	I0526 21:23:17.910220  527485 command_runner.go:124] >     "registry": {
	I0526 21:23:17.910224  527485 command_runner.go:124] >       "mirrors": {
	I0526 21:23:17.910228  527485 command_runner.go:124] >         "docker.io": {
	I0526 21:23:17.910232  527485 command_runner.go:124] >           "endpoint": [
	I0526 21:23:17.910237  527485 command_runner.go:124] >             "https://registry-1.docker.io"
	I0526 21:23:17.910240  527485 command_runner.go:124] >           ]
	I0526 21:23:17.910244  527485 command_runner.go:124] >         }
	I0526 21:23:17.910247  527485 command_runner.go:124] >       },
	I0526 21:23:17.910251  527485 command_runner.go:124] >       "configs": null,
	I0526 21:23:17.910255  527485 command_runner.go:124] >       "auths": null,
	I0526 21:23:17.910259  527485 command_runner.go:124] >       "headers": null
	I0526 21:23:17.910262  527485 command_runner.go:124] >     },
	I0526 21:23:17.910266  527485 command_runner.go:124] >     "imageDecryption": {
	I0526 21:23:17.910270  527485 command_runner.go:124] >       "keyModel": ""
	I0526 21:23:17.910273  527485 command_runner.go:124] >     },
	I0526 21:23:17.910277  527485 command_runner.go:124] >     "disableTCPService": true,
	I0526 21:23:17.910283  527485 command_runner.go:124] >     "streamServerAddress": "",
	I0526 21:23:17.910287  527485 command_runner.go:124] >     "streamServerPort": "10010",
	I0526 21:23:17.910295  527485 command_runner.go:124] >     "streamIdleTimeout": "4h0m0s",
	I0526 21:23:17.910299  527485 command_runner.go:124] >     "enableSelinux": false,
	I0526 21:23:17.910304  527485 command_runner.go:124] >     "selinuxCategoryRange": 1024,
	I0526 21:23:17.910309  527485 command_runner.go:124] >     "sandboxImage": "k8s.gcr.io/pause:3.2",
	I0526 21:23:17.910313  527485 command_runner.go:124] >     "statsCollectPeriod": 10,
	I0526 21:23:17.910317  527485 command_runner.go:124] >     "systemdCgroup": false,
	I0526 21:23:17.910323  527485 command_runner.go:124] >     "enableTLSStreaming": false,
	I0526 21:23:17.910329  527485 command_runner.go:124] >     "x509KeyPairStreaming": {
	I0526 21:23:17.910333  527485 command_runner.go:124] >       "tlsCertFile": "",
	I0526 21:23:17.910337  527485 command_runner.go:124] >       "tlsKeyFile": ""
	I0526 21:23:17.910340  527485 command_runner.go:124] >     },
	I0526 21:23:17.910345  527485 command_runner.go:124] >     "maxContainerLogSize": 16384,
	I0526 21:23:17.910349  527485 command_runner.go:124] >     "disableCgroup": false,
	I0526 21:23:17.910353  527485 command_runner.go:124] >     "disableApparmor": false,
	I0526 21:23:17.910357  527485 command_runner.go:124] >     "restrictOOMScoreAdj": false,
	I0526 21:23:17.910362  527485 command_runner.go:124] >     "maxConcurrentDownloads": 3,
	I0526 21:23:17.910366  527485 command_runner.go:124] >     "disableProcMount": false,
	I0526 21:23:17.910370  527485 command_runner.go:124] >     "unsetSeccompProfile": "",
	I0526 21:23:17.910375  527485 command_runner.go:124] >     "tolerateMissingHugetlbController": true,
	I0526 21:23:17.910381  527485 command_runner.go:124] >     "disableHugetlbController": true,
	I0526 21:23:17.910388  527485 command_runner.go:124] >     "ignoreImageDefinedVolumes": false,
	I0526 21:23:17.910398  527485 command_runner.go:124] >     "containerdRootDir": "/mnt/vda1/var/lib/containerd",
	I0526 21:23:17.910404  527485 command_runner.go:124] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0526 21:23:17.910412  527485 command_runner.go:124] >     "rootDir": "/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0526 21:23:17.910417  527485 command_runner.go:124] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0526 21:23:17.910420  527485 command_runner.go:124] >   },
	I0526 21:23:17.910425  527485 command_runner.go:124] >   "golang": "go1.13.15",
	I0526 21:23:17.910456  527485 command_runner.go:124] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:23:17.910463  527485 command_runner.go:124] > }
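	The JSON dump above has the shape of crictl info output (a top-level "config" object plus "golang" and "lastCNILoadStatus"). A minimal sketch for pulling out the fields that matter for CNI debugging, assuming crictl and jq are present on the node (jq is not part of the minikube guest by default):
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info \
	      | jq '{cni: .config.cni, sandboxImage: .config.sandboxImage, lastCNILoadStatus: .lastCNILoadStatus}'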
	I0526 21:23:17.910821  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:23:17.910838  527485 cni.go:154] 1 nodes found, recommending kindnet
	I0526 21:23:17.910848  527485 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0526 21:23:17.910861  527485 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210526212238-510955 NodeName:multinode-20210526212238-510955 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0526 21:23:17.911001  527485 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-20210526212238-510955"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
	I0526 21:23:17.911073  527485 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-20210526212238-510955 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.229 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
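	The [Service] drop-in above blanks ExecStart and re-declares it with the containerd-specific kubelet flags. A minimal sketch of how such a drop-in is normally installed and picked up, assuming standard systemd locations (the log copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the local source filename here is illustrative):
	    sudo install -D -m 0644 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet
	    systemctl cat kubelet   # confirm the drop-in is layered onto kubelet.service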
	I0526 21:23:17.911118  527485 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0526 21:23:17.918180  527485 command_runner.go:124] > kubeadm
	I0526 21:23:17.918191  527485 command_runner.go:124] > kubectl
	I0526 21:23:17.918194  527485 command_runner.go:124] > kubelet
	I0526 21:23:17.918580  527485 binaries.go:44] Found k8s binaries, skipping transfer
	I0526 21:23:17.918625  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0526 21:23:17.925392  527485 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (578 bytes)
	I0526 21:23:17.937000  527485 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0526 21:23:17.948647  527485 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1903 bytes)
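	The generated config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml. A hedged sketch for eyeballing it against kubeadm's own defaults for the same API versions (kubeadm v1.20 supports config print init-defaults; the temp file path is illustrative):
	    kubeadm config print init-defaults \
	      --component-configs KubeletConfiguration,KubeProxyConfiguration > /tmp/kubeadm-defaults.yaml
	    diff -u /tmp/kubeadm-defaults.yaml /var/tmp/minikube/kubeadm.yaml.new | less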
	I0526 21:23:17.961092  527485 ssh_runner.go:149] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0526 21:23:17.965162  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:23:17.975398  527485 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955 for IP: 192.168.39.229
	I0526 21:23:17.975437  527485 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key
	I0526 21:23:17.975450  527485 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key
	I0526 21:23:17.975492  527485 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key
	I0526 21:23:17.975501  527485 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt with IP's: []
	I0526 21:23:18.150789  527485 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt ...
	I0526 21:23:18.150819  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt: {Name:mka353ee94583202e0ac0ab8b589d54e00abd226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.151035  527485 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key ...
	I0526 21:23:18.151068  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key: {Name:mk56ed57fbfad1ce9204b3afb46ba92eb135d7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.151198  527485 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2
	I0526 21:23:18.151213  527485 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2 with IP's: [192.168.39.229 10.96.0.1 127.0.0.1 10.0.0.1]
	I0526 21:23:18.319161  527485 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2 ...
	I0526 21:23:18.319186  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2: {Name:mk60b9f5977b906dd74e7409f8fce67aafe5ae90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.319342  527485 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2 ...
	I0526 21:23:18.319355  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2: {Name:mkea8db32c7e69da0830c942843d97c5a8f24216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.319431  527485 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt
	I0526 21:23:18.319484  527485 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key
	I0526 21:23:18.319536  527485 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key
	I0526 21:23:18.319558  527485 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt with IP's: []
	I0526 21:23:18.451761  527485 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt ...
	I0526 21:23:18.451785  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt: {Name:mk351ac3702144d65129d3ce5ad96c8410dc8c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.451911  527485 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key ...
	I0526 21:23:18.451922  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key: {Name:mk1619fd287078bac03af9aec2063e21580ea46d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.452000  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0526 21:23:18.452022  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0526 21:23:18.452035  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0526 21:23:18.452045  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0526 21:23:18.452057  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0526 21:23:18.452072  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0526 21:23:18.452084  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0526 21:23:18.452096  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0526 21:23:18.452143  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem (1338 bytes)
	W0526 21:23:18.452176  527485 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955_empty.pem, impossibly tiny 0 bytes
	I0526 21:23:18.452186  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem (1675 bytes)
	I0526 21:23:18.452207  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem (1078 bytes)
	I0526 21:23:18.452229  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem (1123 bytes)
	I0526 21:23:18.452252  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem (1679 bytes)
	I0526 21:23:18.452281  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.452298  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem -> /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.453162  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0526 21:23:18.471582  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0526 21:23:18.488065  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0526 21:23:18.504684  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0526 21:23:18.520451  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0526 21:23:18.536643  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0526 21:23:18.552821  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0526 21:23:18.569056  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0526 21:23:18.584573  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0526 21:23:18.600174  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem --> /usr/share/ca-certificates/510955.pem (1338 bytes)
	I0526 21:23:18.615487  527485 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0526 21:23:18.626836  527485 ssh_runner.go:149] Run: openssl version
	I0526 21:23:18.632434  527485 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0526 21:23:18.632493  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0526 21:23:18.639790  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.644120  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.644156  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.644186  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.649647  527485 command_runner.go:124] > b5213941
	I0526 21:23:18.649727  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0526 21:23:18.657004  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510955.pem && ln -fs /usr/share/ca-certificates/510955.pem /etc/ssl/certs/510955.pem"
	I0526 21:23:18.665255  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.669430  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.669745  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.669782  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.675076  527485 command_runner.go:124] > 51391683
	I0526 21:23:18.675278  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510955.pem /etc/ssl/certs/51391683.0"
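	The openssl/ln pairs above implement the standard subject-hash symlink layout under /etc/ssl/certs. A minimal sketch of the same convention for one CA file, using the paths from the log (the final verify step is an assumed sanity check, not something minikube runs here):
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem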
	I0526 21:23:18.682633  527485 kubeadm.go:390] StartCluster: {Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0526 21:23:18.682698  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0526 21:23:18.682731  527485 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 21:23:18.704402  527485 cri.go:76] found id: ""
	I0526 21:23:18.704439  527485 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0526 21:23:18.712334  527485 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0526 21:23:18.712354  527485 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0526 21:23:18.712361  527485 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0526 21:23:18.712683  527485 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0526 21:23:18.719494  527485 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0526 21:23:18.726749  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0526 21:23:18.726770  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0526 21:23:18.726782  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0526 21:23:18.726800  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 21:23:18.727063  527485 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 21:23:18.727086  527485 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0526 21:23:18.870427  527485 command_runner.go:124] > [init] Using Kubernetes version: v1.20.2
	I0526 21:23:18.870523  527485 command_runner.go:124] > [preflight] Running pre-flight checks
	I0526 21:23:19.181632  527485 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0526 21:23:19.181802  527485 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0526 21:23:19.181923  527485 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0526 21:23:19.285040  527485 out.go:197]   - Generating certificates and keys ...
	I0526 21:23:19.282935  527485 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0526 21:23:19.285188  527485 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0526 21:23:19.285310  527485 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0526 21:23:19.412102  527485 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0526 21:23:19.592359  527485 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0526 21:23:19.823466  527485 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0526 21:23:20.134473  527485 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0526 21:23:20.238455  527485 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0526 21:23:20.238645  527485 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210526212238-510955] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0526 21:23:20.610159  527485 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0526 21:23:20.610354  527485 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210526212238-510955] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0526 21:23:20.699903  527485 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0526 21:23:20.838222  527485 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0526 21:23:20.943728  527485 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0526 21:23:20.943984  527485 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0526 21:23:21.116330  527485 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0526 21:23:21.269108  527485 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0526 21:23:21.477568  527485 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0526 21:23:21.664768  527485 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0526 21:23:21.681316  527485 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0526 21:23:21.681810  527485 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0526 21:23:21.681889  527485 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0526 21:23:21.832843  527485 out.go:197]   - Booting up control plane ...
	I0526 21:23:21.830617  527485 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0526 21:23:21.833002  527485 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0526 21:23:21.835950  527485 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0526 21:23:21.836999  527485 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0526 21:23:21.838913  527485 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0526 21:23:21.849864  527485 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0526 21:23:36.852882  527485 command_runner.go:124] > [apiclient] All control plane components are healthy after 15.004823 seconds
	I0526 21:23:36.852998  527485 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0526 21:23:36.890092  527485 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0526 21:23:37.424532  527485 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0526 21:23:37.424782  527485 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210526212238-510955 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0526 21:23:37.947722  527485 out.go:197]   - Configuring RBAC rules ...
	I0526 21:23:37.943949  527485 command_runner.go:124] > [bootstrap-token] Using token: 219e67.wy9tafwbla0sc2zj
	I0526 21:23:37.947854  527485 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0526 21:23:37.959349  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0526 21:23:37.971621  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0526 21:23:37.976035  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0526 21:23:37.981660  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0526 21:23:37.989573  527485 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0526 21:23:38.020400  527485 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0526 21:23:38.407457  527485 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0526 21:23:38.471668  527485 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0526 21:23:38.473019  527485 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0526 21:23:38.473115  527485 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0526 21:23:38.473152  527485 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0526 21:23:38.473219  527485 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0526 21:23:38.473290  527485 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0526 21:23:38.473370  527485 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0526 21:23:38.473469  527485 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0526 21:23:38.473557  527485 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0526 21:23:38.473656  527485 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0526 21:23:38.473751  527485 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0526 21:23:38.473857  527485 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0526 21:23:38.473954  527485 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0526 21:23:38.474061  527485 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 219e67.wy9tafwbla0sc2zj \
	I0526 21:23:38.474204  527485 command_runner.go:124] >     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace \
	I0526 21:23:38.474247  527485 command_runner.go:124] >     --control-plane 
	I0526 21:23:38.474373  527485 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0526 21:23:38.474486  527485 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 219e67.wy9tafwbla0sc2zj \
	I0526 21:23:38.474618  527485 command_runner.go:124] >     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace 
	I0526 21:23:38.475801  527485 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
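	The --discovery-token-ca-cert-hash printed in the join commands above is a sha256 of the cluster CA public key. A hedged sketch of how it can be recomputed on the control plane, following the command pattern documented for kubeadm (this cluster keeps its CA at /var/lib/minikube/certs/ca.crt per the certificatesDir in the kubeadm config; an RSA CA key is assumed):
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'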
	I0526 21:23:38.475832  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:23:38.475841  527485 cni.go:154] 1 nodes found, recommending kindnet
	I0526 21:23:38.477670  527485 out.go:170] * Configuring CNI (Container Networking Interface) ...
	I0526 21:23:38.477738  527485 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0526 21:23:38.483656  527485 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0526 21:23:38.483677  527485 command_runner.go:124] >   Size: 2849304   	Blocks: 5568       IO Block: 4096   regular file
	I0526 21:23:38.483687  527485 command_runner.go:124] > Device: 10h/16d	Inode: 23213       Links: 1
	I0526 21:23:38.483697  527485 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:23:38.483705  527485 command_runner.go:124] > Access: 2021-05-26 21:22:53.150354389 +0000
	I0526 21:23:38.483715  527485 command_runner.go:124] > Modify: 2021-05-05 21:33:55.000000000 +0000
	I0526 21:23:38.483722  527485 command_runner.go:124] > Change: 2021-05-26 21:22:48.920437741 +0000
	I0526 21:23:38.483729  527485 command_runner.go:124] >  Birth: -
	I0526 21:23:38.483805  527485 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	I0526 21:23:38.483820  527485 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0526 21:23:38.502629  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0526 21:23:38.941773  527485 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0526 21:23:38.948736  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0526 21:23:38.957671  527485 command_runner.go:124] > serviceaccount/kindnet created
	I0526 21:23:38.973203  527485 command_runner.go:124] > daemonset.apps/kindnet created
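	With the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet applied above, a minimal sketch for confirming the rollout before scheduling workloads (object name taken from the log; the timeout value is illustrative):
	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	    kubectl -n kube-system get daemonset kindnet -o wide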
	I0526 21:23:38.975433  527485 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0526 21:23:38.975491  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:38.975514  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff minikube.k8s.io/name=multinode-20210526212238-510955 minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:39.150616  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0526 21:23:39.154017  527485 command_runner.go:124] > -16
	I0526 21:23:39.154057  527485 ops.go:34] apiserver oom_adj: -16
	I0526 21:23:39.154092  527485 command_runner.go:124] > node/multinode-20210526212238-510955 labeled
	I0526 21:23:39.154410  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:39.277118  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:39.778285  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:39.888574  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:40.278401  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:40.377727  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:40.778027  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:40.874277  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:41.278416  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:41.387407  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:41.778275  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:41.899349  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:42.277987  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:42.370237  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:42.778358  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:42.885788  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:43.277993  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:43.376936  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:43.778336  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:43.875676  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:44.277650  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:44.374442  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:44.778675  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:44.890116  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:45.278626  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:45.378443  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:45.777968  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:45.877181  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:46.277884  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:46.383710  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:46.778571  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:46.882908  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:47.277946  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:47.371861  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:47.777746  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:47.879722  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:48.278491  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:48.377059  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:48.778243  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:48.878796  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:49.278610  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:49.381977  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:49.778335  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:49.881825  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:50.278408  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:50.373508  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:50.778066  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:50.883670  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:51.277752  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:51.379649  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:51.777682  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:51.879720  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:52.278027  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:52.456170  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:52.777962  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:52.901956  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:53.278228  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:53.645830  527485 command_runner.go:124] > NAME      SECRETS   AGE
	I0526 21:23:53.645853  527485 command_runner.go:124] > default   0         0s
	I0526 21:23:53.648299  527485 kubeadm.go:985] duration metric: took 14.672869734s to wait for elevateKubeSystemPrivileges.
	I0526 21:23:53.648338  527485 kubeadm.go:392] StartCluster complete in 34.965708295s
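	The retry sequence above polls kubectl get sa default until the token controller has created the ServiceAccount (about 14.7s here). The same wait expressed as a plain shell loop, as a hedged sketch using the binary and kubeconfig paths from the log (sleep interval illustrative):
	    until sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n default get serviceaccount default >/dev/null 2>&1; do
	      sleep 1
	    done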
	I0526 21:23:53.648363  527485 settings.go:142] acquiring lock: {Name:mkb47980bcf6470cf1fcb3a16dfb83321726bd1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:53.648516  527485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:53.650210  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig: {Name:mk1cc7fc8b8e5fab9f3b22f1113879e2241e6726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:53.651164  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:53.651914  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:23:53.653003  527485 cert_rotation.go:137] Starting client certificate rotation controller
	I0526 21:23:53.654302  527485 round_trippers.go:422] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0526 21:23:53.654317  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:53.654322  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:53.654326  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:53.668197  527485 round_trippers.go:448] Response Status: 200 OK in 13 milliseconds
	I0526 21:23:53.668217  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:53.668223  527485 round_trippers.go:454]     Content-Length: 291
	I0526 21:23:53.668227  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:53 GMT
	I0526 21:23:53.668230  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:53.668233  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:53.668236  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:53.668239  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:53.668259  527485 request.go:1107] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"397","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:53.668845  527485 request.go:1107] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"397","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:53.668903  527485 round_trippers.go:422] PUT https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0526 21:23:53.668909  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:53.668913  527485 round_trippers.go:433]     Content-Type: application/json
	I0526 21:23:53.668917  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:53.668921  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:53.678955  527485 round_trippers.go:448] Response Status: 200 OK in 10 milliseconds
	I0526 21:23:53.678975  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:53.678979  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:53 GMT
	I0526 21:23:53.678984  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:53.678988  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:53.678992  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:53.678997  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:53.679001  527485 round_trippers.go:454]     Content-Length: 291
	I0526 21:23:53.679021  527485 request.go:1107] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"429","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:54.179482  527485 round_trippers.go:422] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0526 21:23:54.179516  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:54.179522  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:54.179527  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:54.182547  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:54.182569  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:54.182575  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:54.182582  527485 round_trippers.go:454]     Content-Length: 291
	I0526 21:23:54.182586  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:54 GMT
	I0526 21:23:54.182591  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:54.182595  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:54.182599  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:54.182625  527485 request.go:1107] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"445","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:54.182752  527485 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210526212238-510955" rescaled to 1
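
Annotation: the GET/PUT pair above is minikube reading the coredns Deployment's autoscaling/v1 Scale subresource and writing it back with spec.replicas lowered from 2 to 1 (visible as the resourceVersion bump 397 → 429). A minimal client-go sketch of the same two calls, for orientation only; this is not minikube's kapi.go, and kubeconfigPath is a placeholder:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rescaleCoreDNS mirrors the logged flow: GET the Scale subresource of the
// coredns Deployment, then PUT it back with the desired replica count.
func rescaleCoreDNS(kubeconfigPath string, replicas int32) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deployments := client.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}

	// PUT the same Scale object back with spec.replicas changed.
	scale.Spec.Replicas = replicas
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		return err
	}
	fmt.Printf("coredns rescaled to %d\n", replicas)
	return nil
}
```

Using the Scale subresource keeps the update narrow: only the replica count changes, which is why the log shows a 291-byte Scale body rather than a full Deployment round-trip.
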
	I0526 21:23:54.182785  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0526 21:23:54.358624  527485 command_runner.go:124] > apiVersion: v1
	I0526 21:23:54.358651  527485 command_runner.go:124] > data:
	I0526 21:23:54.358658  527485 command_runner.go:124] >   Corefile: |
	I0526 21:23:54.358664  527485 command_runner.go:124] >     .:53 {
	I0526 21:23:54.358669  527485 command_runner.go:124] >         errors
	I0526 21:23:54.358679  527485 command_runner.go:124] >         health {
	I0526 21:23:54.358686  527485 command_runner.go:124] >            lameduck 5s
	I0526 21:23:54.358692  527485 command_runner.go:124] >         }
	I0526 21:23:54.358698  527485 command_runner.go:124] >         ready
	I0526 21:23:54.358709  527485 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0526 21:23:54.358720  527485 command_runner.go:124] >            pods insecure
	I0526 21:23:54.358728  527485 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0526 21:23:54.358739  527485 command_runner.go:124] >            ttl 30
	I0526 21:23:54.358746  527485 command_runner.go:124] >         }
	I0526 21:23:54.358753  527485 command_runner.go:124] >         prometheus :9153
	I0526 21:23:54.358763  527485 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0526 21:23:54.358771  527485 command_runner.go:124] >            max_concurrent 1000
	I0526 21:23:54.358780  527485 command_runner.go:124] >         }
	I0526 21:23:54.358786  527485 command_runner.go:124] >         cache 30
	I0526 21:23:54.358795  527485 command_runner.go:124] >         loop
	I0526 21:23:54.358801  527485 command_runner.go:124] >         reload
	I0526 21:23:54.358807  527485 command_runner.go:124] >         loadbalance
	I0526 21:23:54.358812  527485 command_runner.go:124] >     }
	I0526 21:23:54.358819  527485 command_runner.go:124] > kind: ConfigMap
	I0526 21:23:54.358823  527485 command_runner.go:124] > metadata:
	I0526 21:23:54.358853  527485 command_runner.go:124] >   creationTimestamp: "2021-05-26T21:23:38Z"
	I0526 21:23:54.358866  527485 command_runner.go:124] >   managedFields:
	I0526 21:23:54.358875  527485 command_runner.go:124] >   - apiVersion: v1
	I0526 21:23:54.358883  527485 command_runner.go:124] >     fieldsType: FieldsV1
	I0526 21:23:54.358889  527485 command_runner.go:124] >     fieldsV1:
	I0526 21:23:54.358895  527485 command_runner.go:124] >       f:data:
	I0526 21:23:54.358901  527485 command_runner.go:124] >         .: {}
	I0526 21:23:54.358908  527485 command_runner.go:124] >         f:Corefile: {}
	I0526 21:23:54.358916  527485 command_runner.go:124] >     manager: kubeadm
	I0526 21:23:54.358923  527485 command_runner.go:124] >     operation: Update
	I0526 21:23:54.358931  527485 command_runner.go:124] >     time: "2021-05-26T21:23:38Z"
	I0526 21:23:54.358939  527485 command_runner.go:124] >   name: coredns
	I0526 21:23:54.358945  527485 command_runner.go:124] >   namespace: kube-system
	I0526 21:23:54.358953  527485 command_runner.go:124] >   resourceVersion: "260"
	I0526 21:23:54.358961  527485 command_runner.go:124] >   uid: e702ca9d-bb73-430c-8447-a824f2271d73
	I0526 21:23:54.360403  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0526 21:23:54.805623  527485 command_runner.go:124] > configmap/coredns replaced
	I0526 21:23:54.808040  527485 start.go:720] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
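
Annotation: the replace above is produced by piping `kubectl get configmap coredns -o yaml` through sed over SSH, inserting a hosts block ahead of the forward plugin so host.minikube.internal resolves to 192.168.39.1. For reference, roughly the same edit expressed directly against the API with client-go; this is an alternative sketch, not minikube's implementation, and the hosts block text mirrors the sed expression in the log:

```go
package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord splices a hosts{} block in front of the forward plugin in
// the coredns Corefile so host.minikube.internal resolves to hostIP.
func injectHostRecord(client kubernetes.Interface, hostIP string) error {
	ctx := context.TODO()
	cms := client.CoreV1().ConfigMaps("kube-system")

	cm, err := cms.Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}

	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	const forward = "        forward . /etc/resolv.conf"
	if !strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], forward, hosts+forward, 1)
	}

	_, err = cms.Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
```
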
	I0526 21:23:54.808095  527485 start.go:209] Will wait 6m0s for node &{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0526 21:23:54.810063  527485 out.go:170] * Verifying Kubernetes components...
	I0526 21:23:54.810129  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:23:54.808149  527485 addons.go:335] enableAddons start: toEnable=map[], additional=[]
	I0526 21:23:54.808458  527485 cache.go:108] acquiring lock: {Name:mk0fbd6526c48f14b253d250dd93663316e68dc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:23:54.810241  527485 addons.go:55] Setting default-storageclass=true in profile "multinode-20210526212238-510955"
	I0526 21:23:54.810268  527485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210526212238-510955"
	I0526 21:23:54.810241  527485 addons.go:55] Setting storage-provisioner=true in profile "multinode-20210526212238-510955"
	I0526 21:23:54.810392  527485 addons.go:131] Setting addon storage-provisioner=true in "multinode-20210526212238-510955"
	W0526 21:23:54.810415  527485 addons.go:140] addon storage-provisioner should already be in state true
	I0526 21:23:54.810344  527485 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 exists
	I0526 21:23:54.810457  527485 host.go:66] Checking if "multinode-20210526212238-510955" exists ...
	I0526 21:23:54.810470  527485 cache.go:97] cache image "minikube-local-cache-test:functional-20210526211257-510955" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955" took 2.019275ms
	I0526 21:23:54.810490  527485 cache.go:81] save to tar file minikube-local-cache-test:functional-20210526211257-510955 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 succeeded
	I0526 21:23:54.810502  527485 cache.go:88] Successfully saved all images to host disk.
	I0526 21:23:54.810813  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.810857  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.810927  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.810967  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.811052  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.811090  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.822544  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0526 21:23:54.823012  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.823693  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.823729  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.824104  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.824291  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.824813  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:54.825461  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-2
0210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:23:54.826971  527485 node_ready.go:35] waiting up to 6m0s for node "multinode-20210526212238-510955" to be "Ready" ...
	I0526 21:23:54.827050  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:54.827062  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:54.827069  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:54.827077  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:54.828220  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0526 21:23:54.828510  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:54.828544  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.828965  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.828983  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.829006  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-2
0210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:23:54.829501  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.829930  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.829960  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.830501  527485 round_trippers.go:422] GET https://192.168.39.229:8443/apis/storage.k8s.io/v1/storageclasses
	I0526 21:23:54.830514  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:54.830520  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:54.830524  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:54.831071  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:54.831089  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:54.831095  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:54.831100  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:54.831105  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:54.831109  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:54 GMT
	I0526 21:23:54.831114  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:54.831646  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:54.838612  527485 round_trippers.go:448] Response Status: 200 OK in 8 milliseconds
	I0526 21:23:54.838627  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:54.838632  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:54.838637  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:54.838641  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:54.838645  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:54.838650  527485 round_trippers.go:454]     Content-Length: 109
	I0526 21:23:54.838654  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:54 GMT
	I0526 21:23:54.838670  527485 request.go:1107] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"452"},"items":[]}
	I0526 21:23:54.838945  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:41279
	I0526 21:23:54.839318  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.839433  527485 addons.go:131] Setting addon default-storageclass=true in "multinode-20210526212238-510955"
	W0526 21:23:54.839448  527485 addons.go:140] addon default-storageclass should already be in state true
	I0526 21:23:54.839473  527485 host.go:66] Checking if "multinode-20210526212238-510955" exists ...
	I0526 21:23:54.839790  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.839809  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.839856  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.839888  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.840165  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.840362  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.840517  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36599
	I0526 21:23:54.841183  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.841701  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.841725  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.842084  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.842264  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.844326  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.844368  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.845563  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:23:54.847895  527485 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:23:54.848006  527485 addons.go:268] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0526 21:23:54.848021  527485 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0526 21:23:54.848041  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:23:54.851917  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0526 21:23:54.852300  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.852732  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.852754  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.853135  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.853629  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.853669  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.853819  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.854262  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:54.854293  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.854386  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:23:54.854548  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:23:54.854706  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:23:54.854855  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:23:54.856100  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:41329
	I0526 21:23:54.856458  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.856902  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.856930  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.857237  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.857421  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:23:54.857576  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:23:54.857599  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:23:54.862859  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.863221  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:54.863250  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.863372  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:23:54.863534  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:23:54.863692  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:23:54.863834  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:23:54.865144  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38367
	I0526 21:23:54.865512  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.865925  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.865976  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.866265  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.866451  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.869024  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:23:54.869208  527485 addons.go:268] installing /etc/kubernetes/addons/storageclass.yaml
	I0526 21:23:54.869222  527485 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0526 21:23:54.869235  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:23:54.873881  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.874186  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:54.874206  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.874371  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:23:54.874532  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:23:54.874662  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:23:54.874780  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:23:54.972917  527485 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0526 21:23:55.013015  527485 command_runner.go:124] > {
	I0526 21:23:55.013033  527485 command_runner.go:124] >   "images": [
	I0526 21:23:55.013037  527485 command_runner.go:124] >     {
	I0526 21:23:55.013046  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:23:55.013050  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013057  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:23:55.013061  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013065  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013074  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:23:55.013078  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013082  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:23:55.013087  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013091  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013097  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013101  527485 command_runner.go:124] >     },
	I0526 21:23:55.013105  527485 command_runner.go:124] >     {
	I0526 21:23:55.013114  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:23:55.013120  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013126  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:23:55.013130  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013134  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013142  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:23:55.013147  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013151  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:23:55.013154  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013158  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:23:55.013162  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013166  527485 command_runner.go:124] >     },
	I0526 21:23:55.013172  527485 command_runner.go:124] >     {
	I0526 21:23:55.013179  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:23:55.013183  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013188  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:23:55.013193  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013199  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013207  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:23:55.013212  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013216  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:23:55.013220  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013224  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013227  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013231  527485 command_runner.go:124] >     },
	I0526 21:23:55.013234  527485 command_runner.go:124] >     {
	I0526 21:23:55.013241  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:23:55.013245  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013251  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:23:55.013254  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013258  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013266  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:23:55.013271  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013275  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:23:55.013278  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013282  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013289  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013293  527485 command_runner.go:124] >     },
	I0526 21:23:55.013296  527485 command_runner.go:124] >     {
	I0526 21:23:55.013303  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:23:55.013307  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013312  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:23:55.013318  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013322  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013332  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:23:55.013341  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013347  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:23:55.013353  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013360  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013364  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013368  527485 command_runner.go:124] >     },
	I0526 21:23:55.013378  527485 command_runner.go:124] >     {
	I0526 21:23:55.013386  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:23:55.013390  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013394  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:23:55.013398  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013402  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013409  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:23:55.013413  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013418  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:23:55.013421  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013425  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013429  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013432  527485 command_runner.go:124] >     },
	I0526 21:23:55.013435  527485 command_runner.go:124] >     {
	I0526 21:23:55.013442  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:23:55.013447  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013452  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:23:55.013455  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013459  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013466  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:23:55.013471  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013475  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:23:55.013480  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:55.013484  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:55.013488  527485 command_runner.go:124] >       },
	I0526 21:23:55.013492  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013496  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013499  527485 command_runner.go:124] >     },
	I0526 21:23:55.013502  527485 command_runner.go:124] >     {
	I0526 21:23:55.013509  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:23:55.013515  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013520  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:23:55.013523  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013529  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013536  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:23:55.013541  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013546  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:23:55.013549  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:55.013553  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:55.013557  527485 command_runner.go:124] >       },
	I0526 21:23:55.013561  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013564  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013568  527485 command_runner.go:124] >     },
	I0526 21:23:55.013571  527485 command_runner.go:124] >     {
	I0526 21:23:55.013578  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:23:55.013583  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013588  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:23:55.013591  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013595  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013602  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:23:55.013607  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013611  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:23:55.013615  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013619  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013622  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013626  527485 command_runner.go:124] >     },
	I0526 21:23:55.013629  527485 command_runner.go:124] >     {
	I0526 21:23:55.013636  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:23:55.013641  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013646  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:23:55.013649  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013653  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013660  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:23:55.013666  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013673  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:23:55.013676  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:55.013680  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:55.013684  527485 command_runner.go:124] >       },
	I0526 21:23:55.013690  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013694  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013697  527485 command_runner.go:124] >     },
	I0526 21:23:55.013701  527485 command_runner.go:124] >     {
	I0526 21:23:55.013713  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:23:55.013720  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013724  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:23:55.013727  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013731  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013738  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:23:55.013742  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013746  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:23:55.013750  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013754  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013759  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013762  527485 command_runner.go:124] >     }
	I0526 21:23:55.013765  527485 command_runner.go:124] >   ]
	I0526 21:23:55.013768  527485 command_runner.go:124] > }
	I0526 21:23:55.013872  527485 containerd.go:566] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210526211257-510955". assuming images are not preloaded.
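
Annotation: the JSON block above is the output of `sudo crictl images --output json`; minikube scans it for the locally cached test image and, not finding a matching tag, concludes the image must be transferred. A small sketch of that presence check, with field names taken from the JSON shown; this is illustrative and not minikube's own parser:

```go
package sketch

import (
	"encoding/json"
	"os/exec"
	"strings"
)

// crictlImages mirrors the relevant fields of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagePresent reports whether a tag such as
// "minikube-local-cache-test:functional-20210526211257-510955" is already
// known to containerd, allowing for the "docker.io/library/" prefix that the
// CRI adds to bare image names.
func imagePresent(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag || strings.HasSuffix(t, "/"+tag) {
				return true, nil
			}
		}
	}
	return false, nil
}
```
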
	I0526 21:23:55.013886  527485 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210526211257-510955]
	I0526 21:23:55.013935  527485 image.go:162] retrieving image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.013951  527485 image.go:168] checking repository: index.docker.io/library/minikube-local-cache-test
	I0526 21:23:55.034685  527485 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0526 21:23:55.070096  527485 image.go:175] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0526 21:23:55.070120  527485 image.go:176] short name: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.071089  527485 image.go:204] daemon lookup for minikube-local-cache-test:functional-20210526211257-510955: Error response from daemon: reference does not exist
	W0526 21:23:55.119209  527485 image.go:214] authn lookup for minikube-local-cache-test:functional-20210526211257-510955 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:23:55.160352  527485 image.go:218] remote lookup for minikube-local-cache-test:functional-20210526211257-510955: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:23:55.160396  527485 image.go:95] error retrieve Image minikube-local-cache-test:functional-20210526211257-510955 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0526 21:23:55.160422  527485 cache_images.go:106] "minikube-local-cache-test:functional-20210526211257-510955" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.160451  527485 cri.go:205] Removing image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.160496  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:23:55.304580  527485 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0526 21:23:55.310624  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0526 21:23:55.327245  527485 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0526 21:23:55.334462  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:55.334481  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:55.334487  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:55.334491  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:55.338693  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:23:55.338713  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:55.338719  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:55.338724  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:55.338729  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:55.338733  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:55.338738  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:55 GMT
	I0526 21:23:55.338958  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:55.344583  527485 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0526 21:23:55.370748  527485 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0526 21:23:55.388886  527485 command_runner.go:124] > pod/storage-provisioner created
	I0526 21:23:55.396106  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.396128  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.396382  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.396402  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.396412  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.396421  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.396691  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.396706  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.482536  527485 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0526 21:23:55.482581  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.482589  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.482617  527485 command_runner.go:124] > /bin/crictl
	I0526 21:23:55.482694  527485 ssh_runner.go:149] Run: sudo /bin/crictl rmi minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.482863  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.482882  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.482897  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.482903  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.482908  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.483131  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.483149  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.483160  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.483176  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.483190  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.483441  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.483466  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.483482  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.485401  527485 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
	I0526 21:23:55.485427  527485 addons.go:337] enableAddons completed in 677.290125ms
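
Annotation: both addons are applied by running kubectl on the node with an explicit kubeconfig, as the ssh_runner lines above show. A local sketch of the same invocation, with paths copied from the log; it assumes sudo accepts the inline KUBECONFIG assignment (its default behaviour), and it is not minikube's ssh_runner:

```go
package sketch

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the logged command, e.g. with
// "/etc/kubernetes/addons/storage-provisioner.yaml" or
// "/etc/kubernetes/addons/storageclass.yaml" as the manifest.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.20.2/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "storageclass.storage.k8s.io/standard created"
	return err
}
```
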
	I0526 21:23:55.502787  527485 command_runner.go:124] ! time="2021-05-26T21:23:55Z" level=error msg="no such image minikube-local-cache-test:functional-20210526211257-510955"
	I0526 21:23:55.502807  527485 command_runner.go:124] ! time="2021-05-26T21:23:55Z" level=fatal msg="unable to remove the image(s)"
	I0526 21:23:55.502944  527485 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.502981  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.503045  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.507769  527485 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:23:55.507806  527485 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:23:55.507826  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955 (5120 bytes)
	I0526 21:23:55.526757  527485 containerd.go:260] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.526799  527485 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.721424  527485 command_runner.go:124] > unpacking docker.io/library/minikube-local-cache-test:functional-20210526211257-510955 (sha256:d8b8bd0a35bb7de49f0a81841d103dd430b2bd6e4ca4d65facee12d3e0605733)...done
	I0526 21:23:55.722976  527485 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 from cache
	I0526 21:23:55.723013  527485 cache_images.go:113] Successfully loaded all cached images
	I0526 21:23:55.723024  527485 cache_images.go:82] LoadImages completed in 709.129952ms
	I0526 21:23:55.723038  527485 cache_images.go:252] succeeded pushing to: multinode-20210526212238-510955
	I0526 21:23:55.723045  527485 cache_images.go:253] failed pushing to: 
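
Annotation: the cached image tarball is copied to the node and loaded with `sudo ctr -n=k8s.io images import`, which is why it shows up under docker.io/library/ in containerd's k8s.io namespace. Roughly the same import through the containerd Go client; a sketch assuming a v1.4-era client API, with the socket path and namespace taken from the log:

```go
package sketch

import (
	"context"
	"fmt"
	"os"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// importImageTar loads a saved image tarball into containerd's k8s.io
// namespace, which is where the kubelet/CRI looks for images.
func importImageTar(tarPath string) error {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return err
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()

	imgs, err := client.Import(ctx, f)
	if err != nil {
		return err
	}
	for _, img := range imgs {
		fmt.Println("imported", img.Name)
	}
	return nil
}
```
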
	I0526 21:23:55.723070  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.723087  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.723370  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.723392  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.723392  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.723433  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.723449  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.723664  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.723681  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.723707  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.834205  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:55.834222  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:55.834229  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:55.834235  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:55.836956  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:55.836972  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:55.836977  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:55 GMT
	I0526 21:23:55.836982  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:55.836986  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:55.836990  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:55.836995  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:55.837381  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:56.334283  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:56.334299  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:56.334304  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:56.334308  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:56.337012  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:56.337033  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:56.337040  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:56.337045  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:56 GMT
	I0526 21:23:56.337050  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:56.337054  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:56.337057  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:56.337340  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:56.834186  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:56.834209  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:56.834215  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:56.834219  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:56.836716  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:56.836732  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:56.836735  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:56.836739  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:56.836742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:56.836745  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:56.836748  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:56 GMT
	I0526 21:23:56.837308  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:56.837545  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:23:57.334234  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:57.334265  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:57.334273  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:57.334278  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:57.337296  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:57.337321  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:57.337328  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:57.337333  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:57.337338  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:57.337343  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:57.337361  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:57 GMT
	I0526 21:23:57.337978  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:57.833856  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:57.833883  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:57.833890  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:57.833895  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:57.837012  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:57.837035  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:57.837048  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:57.837053  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:57.837057  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:57.837062  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:57.837066  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:57 GMT
	I0526 21:23:57.837668  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:58.334394  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:58.334420  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:58.334427  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:58.334433  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:58.336766  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:58.336787  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:58.336791  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:58.336794  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:58.336798  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:58 GMT
	I0526 21:23:58.336801  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:58.336808  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:58.336932  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:58.833873  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:58.833903  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:58.833908  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:58.833912  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:58.836708  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:58.836730  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:58.836736  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:58.836740  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:58.836744  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:58 GMT
	I0526 21:23:58.836749  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:58.836752  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:58.836927  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:59.334230  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:59.334253  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:59.334258  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:59.334262  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:59.337146  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:59.337164  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:59.337170  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:59.337175  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:59 GMT
	I0526 21:23:59.337181  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:59.337186  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:59.337191  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:59.337362  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:59.337698  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:23:59.833620  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:59.833640  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:59.833645  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:59.833649  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:59.836447  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:59.836466  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:59.836472  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:59.836476  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:59.836481  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:59 GMT
	I0526 21:23:59.836485  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:59.836490  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:59.836772  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:00.333560  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:00.333597  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:00.333610  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:00.333620  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:00.336524  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:00.336538  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:00.336544  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:00 GMT
	I0526 21:24:00.336549  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:00.336553  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:00.336561  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:00.336565  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:00.337143  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:00.834085  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:00.834110  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:00.834115  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:00.834120  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:00.837544  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:00.837560  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:00.837566  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:00.837570  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:00 GMT
	I0526 21:24:00.837573  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:00.837576  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:00.837580  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:00.837949  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:01.333667  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:01.333707  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:01.333720  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:01.333730  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:01.338441  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:01.338459  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:01.338465  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:01 GMT
	I0526 21:24:01.338469  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:01.338473  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:01.338477  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:01.338482  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:01.338551  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:01.338779  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:24:01.834397  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:01.834414  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:01.834419  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:01.834423  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:01.839473  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:24:01.839495  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:01.839501  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:01.839508  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:01.839512  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:01.839517  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:01 GMT
	I0526 21:24:01.839523  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:01.839638  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:02.334370  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:02.334390  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:02.334394  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:02.334398  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:02.336880  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:02.336895  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:02.336900  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:02 GMT
	I0526 21:24:02.336905  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:02.336909  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:02.336913  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:02.336918  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:02.337231  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:02.834118  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:02.834141  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:02.834150  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:02.834158  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:02.836416  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:02.836432  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:02.836438  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:02.836441  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:02.836444  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:02 GMT
	I0526 21:24:02.836447  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:02.836452  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:02.836668  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:03.334415  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:03.334435  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:03.334442  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:03.334448  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:03.337266  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:03.337283  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:03.337289  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:03.337293  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:03 GMT
	I0526 21:24:03.337297  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:03.337312  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:03.337316  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:03.337467  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:03.833582  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:03.833623  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:03.833636  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:03.833647  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:03.836804  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:03.836821  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:03.836825  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:03.836829  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:03 GMT
	I0526 21:24:03.836832  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:03.836835  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:03.836838  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:03.837289  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:03.837518  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:24:04.333679  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:04.333734  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.333753  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.333776  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.336470  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.336488  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.336497  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.336501  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.336506  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.336510  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.336517  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.336603  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:04.336882  527485 node_ready.go:49] node "multinode-20210526212238-510955" has status "Ready":"True"
	I0526 21:24:04.336901  527485 node_ready.go:38] duration metric: took 9.509909886s waiting for node "multinode-20210526212238-510955" to be "Ready" ...
	I0526 21:24:04.336912  527485 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
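For context, the ~500 ms polling loop recorded above (repeated GETs of the Node object until its Ready condition flipped to True after roughly 9.5 s) is the client-side pattern the node_ready check is following. The sketch below is a minimal, illustrative reconstruction of that pattern with client-go, not minikube's actual node_ready.go; the kubeconfig path and the node name are assumptions taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; the path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Node name copied from the log above.
	nodeName := "multinode-20210526212238-510955"

	// Poll every 500ms (the cadence visible in the log) for up to 6 minutes.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet" and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("node Ready:", err == nil)
}

Once the node reports Ready (the "Ready":"True" line above), the harness immediately switches to the same style of polling for the system-critical pods listed in the next log line, which is where the rest of this section picks up.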
	I0526 21:24:04.336985  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:24:04.337006  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.337013  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.337019  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.339707  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.339722  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.339727  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.339732  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.339735  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.339738  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.339742  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.340833  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"478"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},
"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 50612 chars]
	I0526 21:24:04.348904  527485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:24:04.348965  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:04.348971  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.348976  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.348980  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.351219  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.351232  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.351237  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.351242  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.351247  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.351251  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.351255  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.351324  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:04.856942  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:04.856987  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.857002  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.857014  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.859771  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.859786  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.859790  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.859794  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.859797  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.859800  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.859802  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.860245  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:05.356468  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:05.356512  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:05.356524  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:05.356535  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:05.358567  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:05.358589  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:05.358595  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:05.358600  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:05 GMT
	I0526 21:24:05.358604  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:05.358608  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:05.358612  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:05.359053  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:05.857110  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:05.857153  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:05.857166  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:05.857177  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:05.861312  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:05.861331  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:05.861337  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:05 GMT
	I0526 21:24:05.861344  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:05.861348  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:05.861352  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:05.861357  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:05.862236  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:06.357234  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:06.357280  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:06.357294  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:06.357304  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:06.360607  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:06.360627  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:06.360633  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:06.360638  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:06.360644  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:06.360649  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:06.360654  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:06 GMT
	I0526 21:24:06.360824  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:06.361411  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-26 21:23:53 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
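The pod_ready message just above is the informative part of this stretch of log: coredns-74ff55c5b-tw67b is still Pending with PodScheduled=False, reason Unschedulable, because the single node still carries the node.kubernetes.io/not-ready taint that the pod does not tolerate, even though the Node object itself has just turned Ready. The snippet below is a hedged, illustrative reconstruction of the kind of condition check implied by that log line, not minikube's pod_ready.go; the condition values are copied from the log entry.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReadyReason reports whether a pod's Ready condition is True; otherwise it
// returns the first False condition (here PodScheduled=False/Unschedulable),
// which explains why the pod is still waiting.
func podReadyReason(pod *corev1.Pod) (bool, string) {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			return true, ""
		}
		if cond.Status == corev1.ConditionFalse {
			return false, fmt.Sprintf("%s=%s reason=%s message=%q",
				cond.Type, cond.Status, cond.Reason, cond.Message)
		}
	}
	return false, "no conditions reported yet"
}

func main() {
	// Reconstruct the pod status shown in the 21:24:06 log entry (illustrative only).
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{{
				Type:    corev1.PodScheduled,
				Status:  corev1.ConditionFalse,
				Reason:  "Unschedulable",
				Message: "0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
			}},
		},
	}
	ready, why := podReadyReason(pod)
	fmt.Println(ready, why) // false PodScheduled=False reason=Unschedulable ...
}

In the log that follows, the harness simply keeps re-fetching the pod on the same interval until the taint is removed and the scheduler places it, at which point the Ready condition is what it will look for.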
	I0526 21:24:06.856568  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:06.856608  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:06.856639  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:06.856661  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:06.858795  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:06.858812  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:06.858816  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:06.858827  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:06.858838  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:06 GMT
	I0526 21:24:06.858844  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:06.858857  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:06.859182  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:07.357084  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:07.357102  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:07.357107  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:07.357111  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:07.359624  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:07.359644  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:07.359650  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:07.359655  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:07.359661  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:07.359667  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:07.359671  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:07 GMT
	I0526 21:24:07.359972  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:07.856713  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:07.856733  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:07.856738  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:07.856742  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:07.859475  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:07.859491  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:07.859497  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:07 GMT
	I0526 21:24:07.859501  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:07.859506  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:07.859510  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:07.859516  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:07.859595  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:08.356263  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:08.356286  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:08.356291  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:08.356302  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:08.358596  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:08.358613  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:08.358618  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:08.358625  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:08 GMT
	I0526 21:24:08.358629  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:08.358633  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:08.358637  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:08.359132  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:08.857128  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:08.857147  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:08.857152  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:08.857156  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:08.859899  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:08.859917  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:08.859921  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:08.859925  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:08.859928  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:08.859933  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:08.859939  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:08 GMT
	I0526 21:24:08.860080  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:08.860329  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-26 21:23:53 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
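	(Editor's note on the pod_ready entry above: CoreDNS is still Pending because the scheduler reports PodScheduled=False with reason Unschedulable while the lone node still carries the node.kubernetes.io/not-ready taint. For readers reproducing this check outside the test harness, a minimal client-go sketch follows; the names and error handling are illustrative, not minikube's actual helper, and it assumes a reachable kubeconfig at ~/.kube/config.)

	// Sketch only: report why a pod is still Pending, in the spirit of the
	// pod_ready log line above. Illustrative, not minikube's code.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: default kubeconfig at ~/.kube/config points at the test cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-74ff55c5b-tw67b", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			// A Pending pod blocked by a taint shows PodScheduled=False, Reason=Unschedulable,
			// with the scheduler's message ("0/1 nodes are available: ...") in c.Message.
			if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionFalse {
				fmt.Printf("pod %s not scheduled: %s: %s\n", pod.Name, c.Reason, c.Message)
			}
		}
	}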
	I0526 21:24:09.356213  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:09.356243  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.356251  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.356256  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.360376  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:09.360393  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.360398  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.360401  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.360404  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.360407  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.360411  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.361262  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"485","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5675 chars]
	I0526 21:24:09.361595  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:09.361610  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.361615  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.361618  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.363586  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:09.363598  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.363603  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.363608  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.363612  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.363616  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.363621  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.364206  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:09.857093  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:09.857121  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.857134  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.857138  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.860493  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:09.860508  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.860514  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.860517  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.860521  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.860525  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.860530  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.860608  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"485","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5675 chars]
	I0526 21:24:09.860950  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:09.860964  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.860969  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.860974  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.863188  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:09.863204  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.863209  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.863213  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.863217  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.863219  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.863222  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.863392  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:10.357197  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:10.357222  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.357228  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.357232  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.359608  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:10.359630  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.359634  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.359637  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.359640  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.359646  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.359649  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.360273  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:10.360614  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:10.360627  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.360631  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.360635  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.362958  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:10.362976  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.362980  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.362985  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.362988  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.362992  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.362996  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.363216  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:10.857155  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:10.857180  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.857185  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.857190  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.859704  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:10.859721  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.859725  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.859728  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.859731  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.859735  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.859738  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.860472  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:10.860834  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:10.860854  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.860879  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.860892  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.862660  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:10.862674  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.862678  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.862684  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.862689  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.862701  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.862706  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.863031  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:10.863289  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:11.357019  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:11.357060  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.357086  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.357091  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.359266  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:11.359279  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.359283  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.359286  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.359289  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.359292  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.359295  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.359589  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:11.359886  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:11.359897  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.359902  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.359906  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.361862  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:11.361877  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.361883  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.361888  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.361893  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.361899  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.361904  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.362186  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:11.857052  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:11.857071  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.857076  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.857080  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.859617  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:11.859634  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.859640  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.859645  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.859651  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.859661  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.859666  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.859919  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:11.860195  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:11.860207  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.860212  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.860216  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.862368  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:11.862382  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.862385  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.862389  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.862391  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.862394  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.862399  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.862557  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:12.356389  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:12.356427  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.356446  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.356459  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.359392  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:12.359406  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.359410  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.359413  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.359416  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.359419  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.359422  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.359513  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:12.359797  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:12.359817  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.359821  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.359825  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.362817  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:12.362831  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.362835  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.362838  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.362842  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.362845  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.362848  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.363080  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:12.857020  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:12.857067  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.857101  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.857107  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.860555  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:12.860572  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.860580  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.860584  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.860588  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.860591  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.860595  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.860784  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:12.861111  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:12.861128  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.861134  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.861140  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.863018  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:12.863031  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.863036  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.863041  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.863045  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.863049  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.863053  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.863215  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:12.863420  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:13.357183  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:13.357220  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.357235  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.357245  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.360137  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:13.360151  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.360154  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.360157  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.360160  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.360163  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.360165  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.360614  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:13.360893  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:13.360904  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.360908  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.360912  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.363029  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:13.363041  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.363044  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.363047  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.363050  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.363053  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.363056  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.363173  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:13.857019  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:13.857035  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.857041  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.857045  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.860110  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:13.860121  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.860125  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.860140  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.860143  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.860146  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.860150  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.860345  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:13.860666  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:13.860681  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.860686  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.860690  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.862917  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:13.862928  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.862932  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.862935  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.862938  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.862941  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.862944  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.863107  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:14.356355  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:14.356401  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.356414  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.356440  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.359191  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.359208  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.359212  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.359217  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.359221  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.359224  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.359227  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.359382  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5780 chars]
	I0526 21:24:14.359789  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:14.359812  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.359819  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.359825  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.362141  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.362155  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.362161  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.362165  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.362170  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.362174  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.362177  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.362701  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:14.362941  527485 pod_ready.go:92] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"True"
	I0526 21:24:14.362971  527485 pod_ready.go:81] duration metric: took 10.014041717s waiting for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:24:14.362983  527485 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
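	(Editor's note: at this point the log moves from CoreDNS to the etcd static pod, repeating the same pattern of GETting /api/v1/namespaces/kube-system/pods/<name> roughly every 500ms until the Ready condition turns True. A hedged client-go sketch of that polling loop is shown below; waitForPodReady is an illustrative name rather than minikube's API, and the 500ms interval is inferred from the timestamps above.)

	// Sketch only: poll until a pod reports Ready, mirroring the retry loop
	// visible in the log. Illustrative, not minikube's implementation.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls the API server every 500ms until the named pod has
	// its Ready condition set to True, or the timeout expires.
	func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	(Returning false, nil on transient errors keeps the wait tolerant of brief apiserver hiccups, matching the retry-until-timeout behavior the log records.)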
	I0526 21:24:14.363078  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:14.363089  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.363093  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.363097  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.365117  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.365130  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.365134  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.365137  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.365140  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.365143  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.365145  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.365835  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:14.366143  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:14.366156  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.366161  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.366165  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.368454  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.368466  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.368470  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.368473  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.368476  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.368479  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.368482  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.369060  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:14.869807  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:14.869825  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.869832  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.869844  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.872520  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.872541  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.872546  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.872551  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.872555  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.872559  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.872564  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.873080  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:14.873368  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:14.873380  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.873385  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.873388  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.876066  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.876083  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.876088  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.876093  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.876100  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.876104  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.876108  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.876633  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:15.370422  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:15.370441  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.370446  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.370456  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.372511  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:15.372523  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.372527  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.372530  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.372536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.372539  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.372542  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.372964  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:15.373230  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:15.373242  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.373247  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.373250  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.375520  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:15.375534  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.375539  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.375544  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.375548  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.375552  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.375557  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.375887  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:15.869629  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:15.869668  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.869681  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.869692  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.872171  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:15.872184  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.872188  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.872191  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.872196  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.872199  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.872202  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.872469  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:15.872744  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:15.872757  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.872761  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.872765  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.874496  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:15.874509  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.874513  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.874516  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.874519  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.874522  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.874525  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.874918  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:16.369696  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:16.369716  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.369720  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.369724  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.372701  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:16.372717  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.372723  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.372728  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.372732  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.372736  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.372740  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.373179  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:16.373418  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:16.373429  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.373433  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.373437  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.375785  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:16.375805  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.375811  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.375816  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.375821  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.375829  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.375834  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.376395  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:16.376645  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:16.870142  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:16.870164  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.870169  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.870173  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.872964  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:16.872986  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.872992  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.872997  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.873000  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.873005  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.873010  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.873175  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:16.873535  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:16.873554  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.873561  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.873568  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.875490  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:16.875509  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.875515  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.875520  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.875523  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.875531  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.875543  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.875712  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:17.369529  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:17.369567  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.369580  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.369590  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.372684  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:17.372699  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.372703  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.372706  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.372709  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.372713  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.372716  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.373072  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:17.373318  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:17.373328  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.373335  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.373338  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.376353  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:17.376371  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.376377  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.376382  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.376387  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.376393  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.376398  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.376756  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:17.870452  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:17.870469  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.870474  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.870478  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.874427  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:17.874444  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.874449  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.874454  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.874458  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.874463  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.874467  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.874573  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:17.874901  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:17.874922  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.874929  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.874935  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.880463  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:24:17.880478  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.880481  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.880485  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.880488  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.880490  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.880494  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.880837  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:18.369893  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:18.369914  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.369919  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.369923  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.371891  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:18.371905  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.371910  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.371914  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.371919  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.371923  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.371928  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.372424  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:18.372720  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:18.372734  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.372738  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.372742  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.374386  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:18.374400  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.374405  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.374409  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.374413  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.374417  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.374421  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.374780  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:18.869893  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:18.869917  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.869922  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.869925  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.873021  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:18.873035  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.873041  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.873045  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.873048  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.873051  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.873054  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.873430  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:18.873684  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:18.873695  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.873699  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.873703  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.875538  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:18.875552  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.875557  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.875564  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.875573  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.875578  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.875586  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.875767  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:18.875976  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:19.369508  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:19.369543  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.369555  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.369564  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.371942  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:19.371957  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.371962  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.371967  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.371971  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.371976  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.371981  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.372167  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:19.372453  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:19.372466  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.372470  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.372474  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.375361  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:19.375373  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.375377  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.375381  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.375385  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.375389  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.375393  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.375829  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:19.869544  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:19.869588  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.869611  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.869626  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.872766  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:19.872784  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.872789  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.872794  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.872798  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.872802  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.872807  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.872911  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:19.873235  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:19.873257  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.873264  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.873270  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.875000  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:19.875010  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.875015  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.875019  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.875023  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.875028  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.875032  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.875400  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:20.370112  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:20.370137  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.370142  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.370145  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.372387  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.372407  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.372412  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.372417  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.372421  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.372425  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.372430  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.373097  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:20.373411  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:20.373426  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.373430  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.373437  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.375520  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.375531  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.375534  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.375537  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.375540  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.375544  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.375546  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.375769  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:20.869429  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:20.869451  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.869456  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.869460  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.872086  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.872105  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.872111  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.872116  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.872120  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.872136  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.872145  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.872317  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:20.872677  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:20.872692  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.872697  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.872700  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.875254  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.875264  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.875267  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.875270  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.875273  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.875276  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.875279  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.875984  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:20.876211  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:21.369735  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:21.369762  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.369768  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.369778  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.372058  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:21.372075  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.372080  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.372085  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.372089  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.372094  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.372098  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.372397  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:21.372747  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:21.372766  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.372773  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.372778  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.375050  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:21.375067  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.375076  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.375080  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.375083  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.375088  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.375092  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.375473  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:21.870195  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:21.870211  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.870220  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.870232  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.872782  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:21.872796  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.872800  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.872803  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.872806  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.872809  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.872815  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.873179  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:21.873495  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:21.873513  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.873520  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.873526  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.875433  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:21.875444  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.875448  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.875451  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.875454  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.875456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.875459  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.876085  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:22.369949  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:22.369969  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.369976  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.369982  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.372425  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:22.372442  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.372447  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.372452  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.372456  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.372460  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.372465  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.372733  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:22.373291  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:22.373309  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.373313  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.373317  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.374939  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:22.374952  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.374958  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.374962  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.374967  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.374972  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.374977  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.375166  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:22.870126  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:22.870148  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.870155  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.870159  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.872882  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:22.872897  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.872902  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.872907  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.872912  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.872916  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.872921  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.873421  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:22.873664  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:22.873674  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.873678  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.873682  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.875830  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:22.875843  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.875847  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.875850  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.875852  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.875855  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.875858  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.876041  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:22.876304  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:23.369961  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:23.370000  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.370005  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.370009  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.372425  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.372437  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.372442  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.372447  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.372451  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.372456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.372459  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.372740  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:23.373035  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:23.373048  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.373053  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.373057  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.375242  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.375256  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.375261  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.375266  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.375270  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.375275  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.375279  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.375675  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:23.869595  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:23.869619  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.869625  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.869631  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.871916  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.871937  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.871942  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.871946  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.871951  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.871955  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.871959  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.872291  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:23.872581  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:23.872594  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.872598  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.872602  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.874793  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.874814  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.874820  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.874824  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.874828  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.874833  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.874838  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.875289  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:24.370204  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:24.370229  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.370234  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.370238  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.373042  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:24.373063  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.373069  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.373072  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.373075  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.373079  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.373082  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.373747  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:24.374026  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:24.374037  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.374042  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.374045  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.376408  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:24.376425  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.376431  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.376436  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.376439  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.376443  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.376445  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.376997  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:24.869876  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:24.869900  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.869905  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.869909  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.898018  527485 round_trippers.go:448] Response Status: 200 OK in 28 milliseconds
	I0526 21:24:24.898034  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.898038  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.898041  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.898045  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.898047  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.898050  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.898227  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:24.898647  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:24.898666  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.898673  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.898678  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.900754  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:24.900765  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.900768  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.900771  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.900774  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.900777  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.900780  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.901160  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:24.901438  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:25.370114  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:25.370133  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.370175  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.370185  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.372773  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.372787  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.372799  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.372803  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.372806  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.372810  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.372813  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.373161  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:25.373561  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:25.373577  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.373583  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.373589  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.376172  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.376187  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.376192  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.376197  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.376201  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.376205  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.376214  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.376616  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:25.870571  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:25.870619  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.870638  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.870660  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.873229  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.873242  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.873252  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.873257  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.873261  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.873266  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.873271  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.873708  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:25.874079  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:25.874095  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.874101  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.874107  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.876566  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.876578  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.876583  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.876588  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.876593  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.876597  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.876602  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.876750  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:26.369551  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:26.369598  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.369616  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.369633  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.372510  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:26.372526  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.372531  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.372536  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.372540  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.372554  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.372558  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.373076  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:26.373730  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:26.373756  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.373762  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.373769  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.376269  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:26.376286  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.376294  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.376300  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.376304  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.376308  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.376313  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.376652  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:26.870464  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:26.870490  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.870495  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.870499  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.873690  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:26.873705  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.873710  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.873715  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.873719  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.873723  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.873728  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.873983  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:26.874344  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:26.874360  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.874367  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.874373  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.877178  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:26.877191  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.877196  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.877200  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.877205  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.877209  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.877214  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.877922  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:27.369631  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:27.369656  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.369661  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.369665  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.372458  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:27.372475  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.372480  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.372485  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.372489  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.372494  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.372497  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.373241  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:27.373508  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:27.373520  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.373524  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.373528  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.375119  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:27.375133  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.375137  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.375140  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.375143  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.375146  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.375149  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.375842  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:27.376141  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:27.869471  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:27.869489  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.869495  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.869499  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.873711  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:27.873729  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.873734  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.873739  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.873743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.873747  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.873752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.874308  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:27.874659  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:27.874677  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.874684  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.874691  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.876698  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:27.876711  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.876715  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.876719  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.876722  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.876726  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.876732  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.877258  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:28.370153  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:28.370175  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.370180  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.370184  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.372318  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:28.372335  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.372340  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.372345  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.372349  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.372353  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.372358  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.372898  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:28.373242  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:28.373257  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.373263  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.373269  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.375852  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:28.375867  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.375872  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.375877  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.375881  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.375885  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.375889  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.376172  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:28.870331  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:28.870351  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.870357  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.870362  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.873052  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:28.873069  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.873074  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.873078  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.873082  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.873086  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.873090  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.873249  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:28.873500  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:28.873512  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.873517  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.873520  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.875519  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:28.875532  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.875536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.875540  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.875544  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.875549  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.875553  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.876082  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:29.369874  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:29.369893  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.369897  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.369901  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.374023  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:29.374033  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.374036  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.374044  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.374049  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.374055  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.374059  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.374472  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:29.374716  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:29.374727  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.374731  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.374735  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.377890  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:29.377907  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.377912  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.377916  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.377920  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.377925  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.377930  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.378108  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:29.378322  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:29.870116  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:29.870167  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.870188  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.870207  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.873144  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:29.873160  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.873164  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.873167  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.873170  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.873174  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.873179  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.873318  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:29.873662  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:29.873679  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.873687  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.873694  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.875723  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:29.875737  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.875743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.875747  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.875752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.875756  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.875761  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.875865  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:30.369596  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:30.369649  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.369662  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.369672  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.372203  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.372219  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.372225  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.372231  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.372236  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.372241  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.372246  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.372669  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:30.372942  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:30.372957  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.372963  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.372967  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.375226  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.375240  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.375245  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.375250  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.375254  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.375259  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.375264  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.375588  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:30.870433  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:30.870461  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.870466  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.870469  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.872968  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.872982  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.872986  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.872990  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.872993  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.872995  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.873004  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.873522  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:30.873819  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:30.873835  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.873843  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.873848  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.876515  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.876530  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.876535  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.876540  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.876545  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.876549  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.876554  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.876853  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:31.369531  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:31.369549  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.369555  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.369559  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.372512  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.372526  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.372531  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.372536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.372540  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.372543  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.372546  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.372908  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:31.373252  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:31.373281  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.373289  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.373295  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.375719  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.375730  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.375734  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.375737  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.375740  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.375742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.375745  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.375973  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:31.869861  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:31.869883  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.869888  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.869893  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.872682  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.872698  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.872702  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.872706  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.872711  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.872716  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.872720  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.872902  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:31.873170  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:31.873185  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.873191  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.873198  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.876043  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.876053  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.876056  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.876059  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.876062  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.876065  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.876068  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.876544  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:31.876777  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:32.370395  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:32.370421  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.370430  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.370436  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.372735  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:32.372748  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.372751  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.372755  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.372758  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.372761  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.372764  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.372983  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:32.373214  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:32.373226  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.373231  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.373234  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.375725  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:32.375736  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.375740  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.375743  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.375746  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.375749  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.375752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.376038  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:32.869828  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:32.869857  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.869864  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.869870  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.873185  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:32.873204  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.873212  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.873217  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.873221  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.873226  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.873230  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.873380  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:32.873698  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:32.873712  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.873717  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.873720  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.876823  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:32.876836  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.876841  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.876846  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.876850  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.876857  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.876876  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.876995  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:33.370489  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:33.370512  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.370517  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.370522  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.373572  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:33.373589  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.373595  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.373600  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.373606  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.373610  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.373615  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.374176  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:33.374471  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:33.374487  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.374494  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.374501  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.376562  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:33.376575  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.376579  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.376582  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.376587  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.376590  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.376596  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.376749  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:33.869862  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:33.869900  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.869916  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.869927  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.872662  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:33.872678  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.872683  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.872688  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.872692  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.872696  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.872700  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.873107  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:33.873440  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:33.873457  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.873464  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.873469  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.875563  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:33.875579  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.875585  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.875589  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.875594  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.875598  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.875602  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.876177  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:34.370143  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:34.370164  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.370169  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.370173  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.373134  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.373150  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.373154  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.373158  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.373161  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.373164  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.373166  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.373651  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:34.373911  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:34.373942  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.373947  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.373950  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.376299  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.376311  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.376314  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.376317  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.376323  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.376326  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.376329  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.376840  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:34.377080  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
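	(The repeating request blocks above are minikube waiting for the etcd static pod to report Ready: roughly every 500 ms it GETs the pod and its node, and pod_ready.go keeps logging "Ready":"False" until the pod's Ready condition flips. Below is a minimal client-go sketch of that polling pattern, not minikube's actual pod_ready.go; the kubeconfig path, timeout, and hard-coded pod name are illustrative assumptions.)

    // poll_ready.go -- illustrative sketch only, not minikube's pod_ready.go.
    // Assumes a reachable cluster via the default kubeconfig.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500 ms (the cadence visible in the log timestamps) until the
        // pod reports Ready=True or the timeout expires.
        err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-multinode-20210526212238-510955", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            fmt.Println("pod never became ready:", err)
        }
    }

	(Run against a live cluster, the sketch prints one Ready=... line per poll, much like the pod_ready.go status lines interleaved in this trace.)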
	I0526 21:24:34.869642  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:34.869684  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.869698  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.869709  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.872541  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.872554  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.872559  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.872564  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.872568  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.872572  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.872575  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.873173  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:34.873439  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:34.873450  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.873455  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.873458  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.875815  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.875829  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.875835  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.875839  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.875842  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.875845  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.875848  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.876202  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:35.370126  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:35.370145  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.370150  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.370154  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.372099  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:35.372110  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.372113  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.372117  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.372119  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.372123  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.372125  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.372267  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:35.372529  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:35.372540  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.372546  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.372550  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.374169  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:35.374185  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.374189  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.374192  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.374195  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.374199  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.374201  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.374360  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:35.870255  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:35.870276  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.870281  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.870285  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.872694  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:35.872708  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.872711  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.872715  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.872718  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.872721  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.872726  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.872951  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:35.873202  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:35.873214  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.873218  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.873222  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.875245  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:35.875258  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.875262  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.875265  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.875270  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.875273  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.875276  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.875589  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:36.370425  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:36.370443  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.370448  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.370452  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.373415  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:36.373431  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.373436  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.373441  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.373445  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.373449  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.373456  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.373763  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:36.374086  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:36.374101  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.374106  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.374110  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.376742  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:36.376756  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.376762  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.376767  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.376771  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.376774  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.376779  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.377046  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:36.377377  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:36.870042  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:36.870089  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.870115  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.870121  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.872966  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:36.872986  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.872992  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.872997  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.873003  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.873012  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.873017  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.873194  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:36.873497  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:36.873511  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.873516  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.873521  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.876710  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:36.876723  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.876728  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.876733  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.876737  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.876742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.876746  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.876888  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:37.369587  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:37.369633  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.369646  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.369657  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.372597  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:37.372617  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.372623  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.372628  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.372632  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.372636  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.372640  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.373174  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:37.373512  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:37.373527  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.373532  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.373536  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.375706  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:37.375718  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.375723  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.375728  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.375732  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.375736  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.375744  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.375897  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:37.869633  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:37.869671  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.869684  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.869695  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.872187  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:37.872200  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.872206  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.872211  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.872216  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.872220  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.872225  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.872350  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:37.872620  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:37.872634  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.872640  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.872646  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.874420  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:37.874439  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.874443  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.874446  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.874449  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.874452  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.874455  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.874529  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:38.370303  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:38.370323  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.370331  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.370337  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.373247  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:38.373268  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.373272  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.373276  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.373280  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.373285  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.373290  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.374009  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:38.374298  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:38.374311  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.374315  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.374319  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.377694  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:38.377708  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.377712  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.377716  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.377718  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.377721  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.377724  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.378369  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:38.378614  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
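	(The round_trippers.go and request.go:1107 lines that make up most of this trace are client-go's HTTP debug logging, emitted only at high klog verbosity; request/response headers plus the truncated "Response Body" lines correspond to roughly -v=8. A hedged sketch of turning the same output on in a small client-go program follows; the verbosity value and kubeconfig path are assumptions based on client-go's documented behavior, not settings recorded in this run.)

    // debug_logging.go -- illustrative sketch only. Raising klog verbosity makes
    // client-go's round_trippers.go / request.go emit the same kind of
    // GET / Request Headers / Response Status / Response Body lines seen above.
    package main

    import (
        "context"
        "flag"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil) // register -v on the default flag set
        flag.Set("v", "8")  // assumed level: URLs, headers, truncated bodies
        flag.Parse()

        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            klog.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Each API call now logs its URL, headers, status and a truncated body
        // to stderr, analogous to the trace in this report.
        _, err = client.CoreV1().Nodes().Get(context.TODO(),
            "multinode-20210526212238-510955", metav1.GetOptions{})
        if err != nil {
            klog.Fatal(err)
        }
    }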
	I0526 21:24:38.869685  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:38.869736  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.869757  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.869795  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.872326  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:38.872346  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.872352  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.872359  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.872363  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.872368  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.872373  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.872506  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:38.872761  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:38.872773  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.872777  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.872781  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.874795  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:38.874807  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.874810  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.874813  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.874816  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.874819  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.874822  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.875023  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:39.370206  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:39.370223  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.370228  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.370232  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.373435  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:39.373456  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.373463  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.373469  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.373474  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.373480  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.373486  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.373968  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:39.374284  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:39.374302  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.374310  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.374319  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.379949  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:24:39.379964  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.379969  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.379973  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.379980  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.379985  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.379990  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.380108  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:39.869983  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:39.870006  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.870013  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.870019  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.872924  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:39.872940  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.872946  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.872950  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.872955  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.872959  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.872963  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.873177  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:39.873518  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:39.873535  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.873539  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.873544  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.875319  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:39.875330  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.875335  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.875339  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.875343  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.875348  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.875352  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.875704  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:40.370463  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:40.370482  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.370486  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.370490  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.373544  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:40.373564  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.373569  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.373574  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.373578  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.373582  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.373587  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.373943  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:40.374194  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:40.374207  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.374213  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.374218  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.376513  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:40.376526  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.376529  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.376532  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.376535  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.376541  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.376544  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.376846  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:40.869472  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:40.869498  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.869506  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.869511  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.871856  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:40.871872  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.871877  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.871880  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.871883  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.871887  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.871890  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.872056  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:40.872437  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:40.872458  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.872466  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.872472  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.874785  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:40.874800  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.874803  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.874806  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.874809  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.874812  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.874815  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.875043  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:40.875292  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:41.369964  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:41.369983  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.369988  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.369992  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.372469  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:41.372483  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.372487  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.372490  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.372493  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.372496  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.372499  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.372974  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:41.373266  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:41.373284  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.373291  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.373297  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.375385  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:41.375397  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.375402  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.375407  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.375412  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.375417  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.375421  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.375772  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:41.869535  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:41.869575  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.869587  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.869597  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.872438  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:41.872458  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.872464  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.872469  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.872474  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.872478  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.872481  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.872804  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:41.873066  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:41.873078  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.873083  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.873087  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.875071  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:41.875083  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.875087  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.875090  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.875093  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.875096  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.875100  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.875215  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:42.370081  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:42.370099  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.370104  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.370109  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.372230  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:42.372246  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.372251  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.372256  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.372260  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.372265  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.372271  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.372403  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:42.372734  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:42.372748  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.372753  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.372759  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.375284  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:42.375294  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.375299  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.375303  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.375307  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.375312  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.375316  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.375554  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:42.870439  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:42.870468  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.870475  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.870482  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.873403  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:42.873426  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.873432  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.873436  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.873439  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.873442  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.873445  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.873520  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:42.873807  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:42.873818  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.873824  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.873828  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.877344  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:42.877361  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.877365  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.877369  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.877374  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.877378  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.877383  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.877614  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:42.877849  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:43.370448  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:43.370472  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.370477  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.370483  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.373051  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:43.373070  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.373076  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.373082  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.373087  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.373092  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.373097  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.373240  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:43.373530  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:43.373542  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.373547  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.373551  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.376311  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:43.376325  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.376329  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.376332  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.376335  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.376338  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.376341  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.377168  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:43.870182  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:43.870199  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.870204  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.870208  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.872727  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:43.872743  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.872748  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.872753  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.872757  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.872762  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.872766  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.873209  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:43.873569  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:43.873587  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.873593  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.873599  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.875540  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:43.875562  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.875570  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.875577  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.875583  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.875591  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.875597  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.876010  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:44.370170  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:44.370194  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.370204  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.370213  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.372274  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:44.372295  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.372301  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.372307  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.372312  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.372327  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.372332  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.372557  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:44.372950  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:44.372971  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.372978  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.372985  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.375148  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:44.375161  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.375165  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.375169  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.375171  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.375175  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.375179  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.375436  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:44.870310  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:44.870328  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.870333  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.870337  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.872765  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:44.872786  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.872790  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.872794  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.872796  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.872804  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.872811  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.873273  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:44.873564  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:44.873578  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.873582  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.873586  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.875549  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:44.875568  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.875573  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.875578  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.875582  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.875586  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.875590  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.875772  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:45.369580  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:45.369630  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.369648  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.369659  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.371522  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:45.371539  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.371545  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.371549  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.371554  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.371558  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.371562  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.372100  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:45.372407  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:45.372421  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.372426  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.372430  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.374935  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:45.374946  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.374950  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.374953  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.374956  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.374959  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.374964  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.375188  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:45.375469  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:45.870034  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:45.870070  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.870083  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.870093  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.873158  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:45.873174  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.873178  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.873181  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.873185  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.873187  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.873190  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.873812  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:45.874126  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:45.874148  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.874155  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.874162  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.877154  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:45.877165  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.877169  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.877172  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.877175  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.877178  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.877181  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.878219  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:46.370167  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:46.370190  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.370195  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.370199  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.372766  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:46.372782  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.372787  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.372791  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.372795  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.372799  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.372804  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.373263  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:46.373622  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:46.373640  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.373647  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.373652  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.375756  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:46.375775  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.375781  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.375787  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.375792  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.375799  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.375804  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.376142  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:46.870043  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:46.870070  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.870077  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.870083  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.873256  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:46.873272  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.873276  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.873280  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.873284  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.873292  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.873296  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.873597  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:46.873976  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:46.873994  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.874000  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.874015  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.876444  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:46.876463  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.876469  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.876475  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.876480  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.876487  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.876493  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.877064  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:47.369945  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:47.369968  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.369974  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.369978  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.373201  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:47.373226  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.373232  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.373238  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.373243  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.373251  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.373256  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.373422  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:47.373726  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:47.373739  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.373744  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.373749  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.375980  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:47.375990  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.375994  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.375997  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.376003  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.376011  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.376015  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.376269  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:47.376559  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:47.870217  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:47.870237  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.870243  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.870247  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.872660  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:47.872674  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.872678  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.872681  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.872684  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.872687  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.872690  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.873083  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:47.873358  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:47.873371  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.873376  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.873380  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.875100  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:47.875116  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.875121  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.875126  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.875130  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.875134  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.875138  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.875353  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:48.370168  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:48.370187  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.370192  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.370196  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.372506  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.372523  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.372528  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.372533  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.372537  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.372541  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.372546  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.373050  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:48.373304  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:48.373316  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.373321  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.373325  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.375682  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.375700  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.375711  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.375716  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.375722  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.375727  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.375735  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.376241  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:48.870296  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:48.870316  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.870321  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.870325  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.872705  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.872723  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.872728  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.872733  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.872738  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.872742  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.872746  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.873355  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:48.873628  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:48.873641  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.873645  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.873649  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.875918  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.875933  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.875937  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.875940  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.875943  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.875947  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.875950  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.876202  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:49.370589  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:49.370632  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.370646  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.370656  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.373143  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.373157  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.373161  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.373164  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.373167  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.373175  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.373183  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.373581  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:49.373924  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:49.373942  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.373946  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.373950  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.376597  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.376614  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.376619  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.376624  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.376628  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.376632  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.376636  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.376819  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:49.377139  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:49.869582  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:49.869619  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.869631  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.869641  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.871788  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.871805  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.871810  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.871815  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.871818  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.871823  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.871827  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.872253  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:49.872569  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:49.872585  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.872590  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.872594  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.874676  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.874693  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.874699  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.874765  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.874784  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.874789  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.874794  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.874972  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:50.369724  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:50.369759  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.369796  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.369819  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.372326  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:50.372343  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.372349  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.372354  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.372358  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.372364  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.372368  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.372818  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:50.373172  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:50.373190  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.373197  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.373203  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.375319  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:50.375334  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.375338  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.375341  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.375344  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.375347  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.375350  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.375843  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:50.869656  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:50.869676  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.869682  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.869686  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.872107  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:50.872124  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.872128  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.872131  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.872134  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.872140  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.872143  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.872530  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:50.872806  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:50.872817  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.872822  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.872826  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.876568  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:50.876583  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.876588  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.876593  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.876597  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.876601  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.876605  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.876759  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:51.369513  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:51.369551  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.369564  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.369575  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.372619  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:51.372633  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.372637  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.372640  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.372643  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.372646  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.372649  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.373265  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:51.373536  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:51.373548  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.373553  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.373557  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.375420  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:51.375435  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.375440  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.375445  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.375449  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.375456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.375461  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.375981  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:51.869622  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:51.869660  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.869672  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.869683  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.873134  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:51.873150  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.873155  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.873160  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.873165  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.873170  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.873174  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.873380  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:51.873674  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:51.873689  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.873693  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.873697  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.875957  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:51.875972  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.875977  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.875981  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.875989  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.875995  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.876000  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.876538  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:51.876815  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:52.369840  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:52.369869  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.369875  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.369879  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.374720  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:52.374742  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.374748  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.374752  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.374757  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.374760  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.374764  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.375122  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:52.375478  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:52.375493  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.375498  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.375502  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.378002  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:52.378019  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.378024  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.378030  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.378034  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.378039  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.378045  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.378319  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:52.870288  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:52.870324  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.870331  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.870335  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.873645  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:52.873663  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.873667  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.873671  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.873674  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.873677  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.873686  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.873994  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:52.874295  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:52.874307  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.874311  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.874315  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.876474  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:52.876487  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.876491  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.876494  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.876497  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.876500  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.876503  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.877108  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:53.370107  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:53.370133  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.370138  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.370142  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.374368  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:53.374384  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.374389  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.374393  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.374398  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.374403  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.374407  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.374670  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:53.375074  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:53.375095  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.375103  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.375109  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.378060  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:53.378072  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.378075  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.378079  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.378086  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.378090  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.378094  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.378417  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:53.869515  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:53.869551  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.869556  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.869560  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.873301  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:53.873322  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.873326  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.873329  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.873332  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.873341  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.873346  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.873912  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:53.874268  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:53.874288  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.874295  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.874303  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.877375  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:53.877389  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.877396  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.877401  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.877406  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.877410  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.877416  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.877760  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:53.878008  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:54.369919  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:54.369943  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.369948  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.369952  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.372996  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:54.373013  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.373017  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.373020  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.373023  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.373026  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.373029  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.374031  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:54.374332  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:54.374343  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.374347  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.374351  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.377425  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:54.377442  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.377448  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.377453  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.377458  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.377462  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.377466  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.378623  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:54.869560  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:54.869607  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.869621  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.869631  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.872295  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:54.872314  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.872318  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.872321  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.872324  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.872327  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.872330  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.872461  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:54.872735  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:54.872746  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.872752  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.872757  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.875249  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:54.875268  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.875273  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.875277  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.875286  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.875289  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.875293  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.875525  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:55.370352  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:55.370376  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.370381  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.370385  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.373460  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:55.373478  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.373483  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.373488  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.373492  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.373497  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.373501  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.374254  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:55.374605  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:55.374620  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.374627  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.374631  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.376626  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:55.376640  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.376644  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.376648  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.376651  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.376654  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.376657  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.376786  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:55.869605  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:55.869648  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.869660  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.869671  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.872649  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:55.872666  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.872671  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.872675  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.872680  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.872685  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.872688  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.873379  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:55.873738  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:55.873752  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.873757  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.873761  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.875854  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:55.875871  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.875876  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.875881  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.875885  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.875889  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.875893  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.876183  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:56.370096  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:56.370113  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.370118  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.370122  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.372239  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:56.372257  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.372262  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.372267  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.372272  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.372276  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.372281  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.372389  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:56.372679  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:56.372692  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.372696  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.372700  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.374500  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:56.374512  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.374517  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.374521  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.374525  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.374530  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.374534  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.374863  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:56.375148  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:56.869657  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:56.869706  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.869720  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.869731  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.872516  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:56.872534  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.872539  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.872544  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.872548  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.872552  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.872557  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.873735  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:56.874028  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:56.874040  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.874045  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.874049  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.876404  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:56.876422  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.876429  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.876435  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.876440  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.876445  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.876448  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.876737  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:57.369571  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:57.369620  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.369639  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.369656  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.372850  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:57.373234  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.373251  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.373254  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.373257  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.373261  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.373264  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.373352  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:57.373631  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:57.373643  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.373648  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.373651  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.376340  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:57.376351  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.376354  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.376357  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.376360  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.376363  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.376366  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.376700  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:57.869527  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:57.869565  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.869581  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.869592  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.872622  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:57.872636  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.872641  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.872646  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.872651  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.872656  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.872661  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.872727  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:57.872996  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:57.873007  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.873012  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.873016  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.875048  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:57.875061  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.875065  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.875068  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.875071  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.875074  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.875077  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.875468  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:58.370322  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:58.370341  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.370346  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.370350  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.372221  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:58.372235  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.372241  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.372244  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.372247  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.372251  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.372255  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.372549  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:58.372894  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:58.372909  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.372914  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.372919  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.374787  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:58.374802  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.374807  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.374812  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.374817  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.374821  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.374825  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.374993  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:58.375292  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
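	[editor's note] The pod_ready.go:102 lines above come from minikube repeatedly fetching the etcd pod and checking its Ready condition. As a rough illustration of that check only (not minikube's actual code), a pod counts as "Ready" once the PodReady entry in status.conditions reports True:

    // readiness.go — illustrative sketch; the package and function names are
    // invented for this note and are not taken from the minikube source tree.
    package readiness

    import corev1 "k8s.io/api/core/v1"

    // IsPodReady reports whether the pod's "Ready" condition is True, which is
    // what the pod_ready.go log lines summarize as Ready:"False" / "True".
    func IsPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }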
	I0526 21:24:58.870234  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:58.870270  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.870284  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.870294  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.872849  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:58.872881  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.872888  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.872895  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.872901  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.872906  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.872911  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.873569  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:58.873888  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:58.873904  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.873909  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.873913  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.876002  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:58.876017  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.876021  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.876024  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.876027  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.876030  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.876034  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.876299  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:59.369490  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:59.369527  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.369542  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.369553  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.372897  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:59.372918  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.372925  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.372931  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.372937  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.372941  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.372947  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.373414  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:59.373741  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:59.373759  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.373764  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.373768  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.375613  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:59.375625  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.375628  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.375632  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.375635  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.375638  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.375641  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.376179  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:59.870070  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:59.870091  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.870097  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.870103  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.873105  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:59.873121  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.873126  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.873129  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.873132  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.873135  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.873139  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.873512  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:59.873779  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:59.873793  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.873798  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.873802  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.876684  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:59.876699  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.876704  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.876710  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.876715  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.876721  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.876726  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.877192  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:00.370068  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:00.370088  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.370092  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.370096  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.372679  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:00.372695  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.372699  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.372702  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.372705  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.372708  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.372711  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.373201  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:00.373448  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:00.373459  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.373463  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.373467  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.376419  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:00.376433  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.376438  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.376442  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.376446  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.376450  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.376454  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.376597  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:00.376885  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:25:00.870428  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:00.870447  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.870452  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.870456  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.872526  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:00.872544  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.872549  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.872554  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.872558  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.872562  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.872567  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.873108  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:00.873421  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:00.873435  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.873440  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.873443  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.875303  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:00.875318  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.875324  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.875328  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.875333  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.875337  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.875344  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.875615  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:01.370455  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:01.370475  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.370479  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.370483  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.372996  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:01.373010  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.373019  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.373025  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.373029  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.373035  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.373042  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.373283  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:01.373600  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:01.373613  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.373618  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.373621  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.375308  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:01.375325  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.375334  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.375338  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.375343  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.375347  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.375352  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.375561  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:01.870334  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:01.870351  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.870356  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.870360  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.874653  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:25:01.874674  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.874682  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.874687  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.874691  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.874695  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.874699  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.874967  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:01.875209  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:01.875220  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.875224  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.875228  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.877718  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:01.877729  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.877734  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.877739  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.877743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.877748  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.877752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.878107  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.370099  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:02.370119  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.370124  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.370129  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.372539  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.372556  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.372561  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.372565  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.372570  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.372585  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.372589  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.372677  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"539","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5642 chars]
	I0526 21:25:02.372945  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.372957  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.372962  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.372965  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.374703  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.374719  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.374724  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.374729  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.374736  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.374739  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.374742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.375182  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.375398  527485 pod_ready.go:92] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:02.375416  527485 pod_ready.go:81] duration metric: took 48.012392127s waiting for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
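	[editor's note] For context on the cadence above: each GET pair (pod, then node) repeats roughly every 500ms until the Ready condition flips, bounded by the "waiting up to 6m0s" budget the harness announces per pod. The sketch below shows that poll-until-Ready pattern with client-go; the kubeconfig path, the use of wait.PollImmediate, and the 500ms/6m values are assumptions for illustration and not minikube's actual implementation.

    // pollready.go — a self-contained sketch of the polling loop visible in the
    // log: GET the pod every ~500ms and stop once its Ready condition is True,
    // or give up after 6 minutes. Paths and literals are placeholders, not taken
    // from this test run.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the run above talks to https://192.168.39.229:8443.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod := "etcd-multinode-20210526212238-510955"
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            p, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), pod, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            fmt.Println("timed out waiting for", pod, "to become Ready:", err)
            return
        }
        fmt.Println(pod, "is Ready")
    }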
	I0526 21:25:02.375430  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:02.375471  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955
	I0526 21:25:02.375481  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.375487  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.375492  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.377329  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.377344  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.377349  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.377353  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.377357  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.377361  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.377365  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.377514  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210526212238-510955","namespace":"kube-system","uid":"5d446255-3487-4319-9b9f-2294a93fd226","resourceVersion":"447","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.mirror":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.seen":"2021-05-26T21:23:43.638984722Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 7266 chars]
	I0526 21:25:02.377767  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.377780  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.377786  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.377791  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.379941  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.379951  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.379956  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.379960  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.379964  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.379968  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.379973  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.380165  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.380384  527485 pod_ready.go:92] pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:02.380396  527485 pod_ready.go:81] duration metric: took 4.954392ms waiting for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:02.380405  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:02.380442  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:02.380450  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.380454  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.380458  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.382393  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.382407  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.382411  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.382416  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.382422  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.382426  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.382432  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.382577  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:02.382844  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.382858  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.382864  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.382869  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.384623  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.384649  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.384652  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.384655  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.384658  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.384661  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.384663  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.385167  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.886040  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:02.886066  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.886070  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.886074  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.888279  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.888306  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.888311  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.888316  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.888320  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.888324  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.888329  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.888444  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:02.888835  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.888852  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.888874  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.888882  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.891010  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.891022  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.891026  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.891029  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.891032  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.891036  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.891040  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.891390  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:03.386295  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:03.386313  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.386318  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.386324  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.388600  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:03.388616  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.388620  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.388625  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.388628  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.388631  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.388634  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.388797  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:03.389217  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:03.389238  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.389245  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.389249  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.391648  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:03.391659  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.391662  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.391665  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.391668  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.391671  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.391680  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.392346  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:03.885913  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:03.885952  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.885964  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.885980  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.889239  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:25:03.889254  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.889260  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.889264  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.889269  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.889274  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.889278  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.889898  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:03.890183  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:03.890196  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.890200  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.890204  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.892796  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:03.892809  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.892815  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.892820  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.892824  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.892828  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.892833  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.893214  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:04.385953  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:04.385974  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.385980  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.385986  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.388914  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:04.388930  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.388935  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.388939  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.388943  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.388946  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.388949  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.389945  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:04.390218  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:04.390229  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.390234  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.390238  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.392034  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:04.392046  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.392050  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.392053  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.392058  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.392061  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.392064  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.392268  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:04.392559  527485 pod_ready.go:102] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:25:04.886008  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:04.886029  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.886037  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.886042  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.888599  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:04.888618  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.888624  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.888629  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.888634  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.888638  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.888642  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.889177  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:04.889477  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:04.889491  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.889496  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.889500  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.892023  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:04.892037  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.892041  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.892046  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.892050  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.892058  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.892062  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.892142  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:05.386081  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:05.386121  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.386137  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.386148  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.388646  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:05.388658  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.388662  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.388665  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.388668  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.388672  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.388676  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.389151  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:05.389448  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:05.389462  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.389468  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.389473  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.391731  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:05.391744  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.391750  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.391754  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.391758  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.391763  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.391776  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.392249  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:05.886100  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:05.886120  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.886125  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.886128  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.889414  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:25:05.889429  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.889433  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.889437  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.889442  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.889446  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.889451  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.889933  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:05.890197  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:05.890208  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.890215  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.890218  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.892713  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:05.892724  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.892729  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.892734  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.892738  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.892741  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.892744  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.893120  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:06.385970  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:06.385989  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.385994  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.385998  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.388784  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:06.388805  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.388809  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.388819  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.388825  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.388830  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.388834  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.389178  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:06.389554  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:06.389571  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.389577  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.389583  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.392024  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:06.392043  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.392048  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.392053  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.392057  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.392062  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.392066  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.392258  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:06.886146  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:06.886166  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.886171  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.886175  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.889263  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:25:06.889277  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.889281  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.889284  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.889288  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.889291  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.889294  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.889456  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:06.889829  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:06.889840  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.889844  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.889849  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.892026  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:06.892038  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.892043  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.892049  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.892054  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.892058  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.892062  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.892188  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:06.892495  527485 pod_ready.go:102] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:25:07.386079  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:07.386099  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.386104  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.386108  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.388338  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.388348  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.388353  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.388357  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.388361  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.388366  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.388371  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.388649  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:07.389070  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:07.389089  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.389095  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.389101  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.391381  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.391392  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.391395  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.391398  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.391404  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.391407  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.391410  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.391608  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:07.886386  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:07.886404  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.886409  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.886413  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.888909  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.888943  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.888948  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.888953  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.888957  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.888962  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.888966  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.889332  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:07.889710  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:07.889729  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.889735  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.889741  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.892177  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.892196  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.892203  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.892208  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.892214  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.892222  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.892227  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.892641  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:08.386366  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:08.386386  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.386391  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.386396  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.388585  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.388597  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.388602  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.388607  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.388611  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.388616  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.388620  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.388770  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:08.389223  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:08.389247  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.389255  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.389271  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.391393  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.391406  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.391409  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.391413  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.391416  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.391418  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.391422  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.391675  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:08.886010  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:08.886030  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.886037  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.886043  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.888255  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.888279  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.888285  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.888290  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.888294  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.888298  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.888302  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.888526  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:08.888895  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:08.888909  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.888914  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.888918  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.891198  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.891211  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.891215  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.891219  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.891222  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.891225  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.891228  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.891430  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.386434  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:09.386464  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.386470  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.386474  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.392555  527485 round_trippers.go:448] Response Status: 200 OK in 6 milliseconds
	I0526 21:25:09.392584  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.392591  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.392596  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.392601  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.392605  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.392610  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.392884  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"546","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6822 chars]
	I0526 21:25:09.393390  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.393410  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.393417  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.393423  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.396417  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.396438  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.396445  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.396451  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.396456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.396464  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.396471  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.397176  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.397504  527485 pod_ready.go:92] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:09.397532  527485 pod_ready.go:81] duration metric: took 7.017118929s waiting for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
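	The loop logged above is the pod_ready wait: the client re-fetches the kube-controller-manager pod (and its node) roughly every 500ms until the pod reports a Ready condition of True, which happens at 21:25:09 after about 7s. As an illustrative sketch only, not minikube's actual pod_ready.go, the hypothetical Go snippet below shows the same polling pattern with client-go; the helper name waitForPodReady, the 500ms interval, and the kubeconfig loading are assumptions made for the example.

	// Hedged sketch of the readiness poll seen in the log above; not minikube code.
	// Assumptions: client-go is available, kubeconfig lives at the default path,
	// and a 500ms poll interval approximates the request cadence in the log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady (hypothetical helper) polls the named pod until its Ready
	// condition is True or the timeout expires, mirroring the GET loop in the log.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitForPodReady(context.Background(), cs,
			"kube-system", "kube-controller-manager-multinode-20210526212238-510955", 6*time.Minute)
		fmt.Println("ready:", err == nil)
	}

	With this shape, the per-request headers and truncated response bodies in the log correspond to each Get call inside the poll closure; the duration metric printed at 21:25:09 is simply the elapsed time of that loop.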
	I0526 21:25:09.397550  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:09.397615  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qbl42
	I0526 21:25:09.397627  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.397635  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.397642  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.399460  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:09.399478  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.399483  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.399489  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.399498  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.399503  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.399507  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.399661  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qbl42","generateName":"kube-proxy-","namespace":"kube-system","uid":"950a915d-c5f0-4e6f-bc12-ee97013032f0","resourceVersion":"453","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"controller-revision-hash":"b89db7f56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"59f7a309-d89a-4050-8e82-fc8da888387f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59f7a309-d89a-4050-8e82-fc8da888387f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".":{ [truncated 5529 chars]
	I0526 21:25:09.399960  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.399974  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.399980  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.399986  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.402671  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.402689  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.402694  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.402699  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.402703  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.402707  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.402711  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.402976  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.403265  527485 pod_ready.go:92] pod "kube-proxy-qbl42" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:09.403280  527485 pod_ready.go:81] duration metric: took 5.713239ms waiting for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:09.403289  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:09.403335  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:25:09.403345  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.403349  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.403353  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.404960  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:09.404975  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.404981  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.404986  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.404990  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.404994  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.404998  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.405340  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"344","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4795 chars]
	I0526 21:25:09.405641  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.405660  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.405667  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.405673  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.407536  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:09.407552  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.407558  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.407564  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.407572  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.407585  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.407590  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.407679  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.908611  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:25:09.908654  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.908667  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.908678  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.911382  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.911396  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.911399  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.911402  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.911405  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.911408  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.911411  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.911640  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"344","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4795 chars]
	I0526 21:25:09.911947  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.911965  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.911972  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.911979  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.914076  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.914089  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.914097  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.914101  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.914104  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.914106  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.914115  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.914297  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:10.408088  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:25:10.408107  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:10.408111  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:10.408115  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:10.410208  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:10.410228  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:10.410234  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:10.410238  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:10.410243  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:10.410247  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:10 GMT
	I0526 21:25:10.410253  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:10.410415  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"547","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4552 chars]
	I0526 21:25:10.410665  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:10.410678  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:10.410683  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:10.410687  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:10.412777  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:10.412794  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:10.412803  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:10.412808  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:10.412812  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:10 GMT
	I0526 21:25:10.412817  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:10.412821  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:10.413168  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:10.413474  527485 pod_ready.go:92] pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:10.413501  527485 pod_ready.go:81] duration metric: took 1.010202839s waiting for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:10.413517  527485 pod_ready.go:38] duration metric: took 1m6.076583011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
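	The readiness phase recorded above (pod_ready.go repeatedly GETs each control-plane pod, checks its Ready condition, then re-reads the node) can be approximated outside the test harness with a short client-go sketch. This is a minimal illustration only: the kubeconfig path is a placeholder, and the pod name and 6m/500ms timings are simply taken from the log lines above, not from minikube source.

	// readiness_sketch.go - rough approximation of the pod_ready.go polling seen above.
	// Assumes a reachable cluster and a kubeconfig at the (hypothetical) path below.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		const ns, name = "kube-system", "kube-scheduler-multinode-20210526212238-510955"
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// Ready:"True" is what the pod_ready.go:92 lines above report.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between polls
		}
		fmt.Println("timed out waiting for Ready")
	}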
	I0526 21:25:10.413541  527485 api_server.go:50] waiting for apiserver process to appear ...
	I0526 21:25:10.413561  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:25:10.413618  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:25:10.431796  527485 command_runner.go:124] > a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c
	I0526 21:25:10.433015  527485 cri.go:76] found id: "a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:10.433031  527485 cri.go:76] found id: ""
	I0526 21:25:10.433039  527485 logs.go:270] 1 containers: [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c]
	I0526 21:25:10.433084  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.437650  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.437679  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:25:10.437721  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:25:10.456742  527485 command_runner.go:124] > c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad
	I0526 21:25:10.458507  527485 cri.go:76] found id: "c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:10.458524  527485 cri.go:76] found id: ""
	I0526 21:25:10.458530  527485 logs.go:270] 1 containers: [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad]
	I0526 21:25:10.458564  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.462771  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.462801  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:25:10.462837  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:25:10.481966  527485 command_runner.go:124] > a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a
	I0526 21:25:10.482087  527485 cri.go:76] found id: "a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:10.482101  527485 cri.go:76] found id: ""
	I0526 21:25:10.482106  527485 logs.go:270] 1 containers: [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a]
	I0526 21:25:10.482140  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.486730  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.486770  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:25:10.486805  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:25:10.505145  527485 command_runner.go:124] > e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08
	I0526 21:25:10.505170  527485 cri.go:76] found id: "e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:10.505176  527485 cri.go:76] found id: ""
	I0526 21:25:10.505180  527485 logs.go:270] 1 containers: [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08]
	I0526 21:25:10.505215  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.508909  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.509381  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:25:10.509430  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:25:10.528716  527485 command_runner.go:124] > de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2
	I0526 21:25:10.528805  527485 cri.go:76] found id: "de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:10.528825  527485 cri.go:76] found id: ""
	I0526 21:25:10.528832  527485 logs.go:270] 1 containers: [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2]
	I0526 21:25:10.528889  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.532768  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.533363  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:25:10.533409  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:25:10.551820  527485 cri.go:76] found id: ""
	I0526 21:25:10.551840  527485 logs.go:270] 0 containers: []
	W0526 21:25:10.551846  527485 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:25:10.551853  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:25:10.551889  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:25:10.571635  527485 command_runner.go:124] > 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d
	I0526 21:25:10.571702  527485 cri.go:76] found id: "5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:10.571722  527485 cri.go:76] found id: ""
	I0526 21:25:10.571729  527485 logs.go:270] 1 containers: [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d]
	I0526 21:25:10.571761  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.575596  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.575627  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:25:10.575664  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:25:10.594214  527485 command_runner.go:124] > 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18
	I0526 21:25:10.594891  527485 cri.go:76] found id: "2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:10.594908  527485 cri.go:76] found id: ""
	I0526 21:25:10.594914  527485 logs.go:270] 1 containers: [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18]
	I0526 21:25:10.594943  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.599658  527485 command_runner.go:124] > /bin/crictl
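	The container-discovery loop above repeats the same two steps per component: sudo crictl ps -a --quiet --name=<component> to collect container IDs, then which crictl to locate the binary. A rough local equivalent of that step, run directly on the node rather than through minikube's ssh_runner, could look like the sketch below (purely illustrative; it only shells out to crictl with the same flags shown in the log).

	// crictl_list_sketch.go - local equivalent of the cri.go listing step above.
	// Run on the node itself (minikube executes it over SSH); requires crictl and sudo.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of all containers (any state) whose name matches.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager"}
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("%s: error: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers: %s -> %v\n", len(ids), c, ids)
		}
	}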
	I0526 21:25:10.599691  527485 logs.go:123] Gathering logs for dmesg ...
	I0526 21:25:10.599704  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:25:10.610167  527485 command_runner.go:124] > [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0526 21:25:10.610190  527485 command_runner.go:124] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0526 21:25:10.610200  527485 command_runner.go:124] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0526 21:25:10.610210  527485 command_runner.go:124] > [  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0526 21:25:10.610218  527485 command_runner.go:124] > [  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	I0526 21:25:10.610225  527485 command_runner.go:124] >               If you want to keep using the local clock, then add:
	I0526 21:25:10.610231  527485 command_runner.go:124] >                 "trace_clock=local"
	I0526 21:25:10.610235  527485 command_runner.go:124] >               on the kernel command line
	I0526 21:25:10.610246  527485 command_runner.go:124] > [  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0526 21:25:10.610253  527485 command_runner.go:124] > [  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	I0526 21:25:10.610265  527485 command_runner.go:124] > [  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	I0526 21:25:10.610274  527485 command_runner.go:124] > [  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	I0526 21:25:10.610286  527485 command_runner.go:124] > [  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	I0526 21:25:10.610298  527485 command_runner.go:124] > [  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	I0526 21:25:10.610313  527485 command_runner.go:124] > [  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	I0526 21:25:10.610330  527485 command_runner.go:124] > [  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	I0526 21:25:10.610340  527485 command_runner.go:124] > [May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	I0526 21:25:10.610354  527485 command_runner.go:124] > [  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	I0526 21:25:10.610367  527485 command_runner.go:124] > [  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	I0526 21:25:10.610379  527485 command_runner.go:124] > [ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	I0526 21:25:10.610388  527485 command_runner.go:124] > [ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	I0526 21:25:10.610400  527485 command_runner.go:124] > [May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	I0526 21:25:10.610409  527485 command_runner.go:124] > [ +45.152218] NFSD: Unable to end grace period: -110
	I0526 21:25:10.611372  527485 logs.go:123] Gathering logs for kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] ...
	I0526 21:25:10.611385  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:10.629290  527485 command_runner.go:124] ! Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	I0526 21:25:10.629469  527485 command_runner.go:124] ! I0526 21:23:29.805604       1 server.go:632] external host was not specified, using 192.168.39.229
	I0526 21:25:10.629586  527485 command_runner.go:124] ! I0526 21:23:29.806982       1 server.go:182] Version: v1.20.2
	I0526 21:25:10.629639  527485 command_runner.go:124] ! I0526 21:23:30.593640       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0526 21:25:10.629742  527485 command_runner.go:124] ! I0526 21:23:30.598821       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:10.630141  527485 command_runner.go:124] ! I0526 21:23:30.598945       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:10.630210  527485 command_runner.go:124] ! I0526 21:23:30.600954       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:10.630270  527485 command_runner.go:124] ! I0526 21:23:30.601309       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:10.630413  527485 command_runner.go:124] ! I0526 21:23:30.616590       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.630534  527485 command_runner.go:124] ! I0526 21:23:30.617065       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.630581  527485 command_runner.go:124] ! I0526 21:23:30.995013       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.630779  527485 command_runner.go:124] ! I0526 21:23:30.995139       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.630887  527485 command_runner.go:124] ! I0526 21:23:31.030659       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:10.631019  527485 command_runner.go:124] ! I0526 21:23:31.031231       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.631078  527485 command_runner.go:124] ! I0526 21:23:31.031324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.631184  527485 command_runner.go:124] ! I0526 21:23:31.032369       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.631276  527485 command_runner.go:124] ! I0526 21:23:31.032725       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.631389  527485 command_runner.go:124] ! I0526 21:23:31.143094       1 instance.go:289] Using reconciler: lease
	I0526 21:25:10.631554  527485 command_runner.go:124] ! I0526 21:23:31.148814       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.631650  527485 command_runner.go:124] ! I0526 21:23:31.148936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.631741  527485 command_runner.go:124] ! I0526 21:23:31.164327       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.632118  527485 command_runner.go:124] ! I0526 21:23:31.164627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.632166  527485 command_runner.go:124] ! I0526 21:23:31.183831       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.632438  527485 command_runner.go:124] ! I0526 21:23:31.184185       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.632496  527485 command_runner.go:124] ! I0526 21:23:31.203621       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.632605  527485 command_runner.go:124] ! I0526 21:23:31.204140       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.632679  527485 command_runner.go:124] ! I0526 21:23:31.218608       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633053  527485 command_runner.go:124] ! I0526 21:23:31.218929       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.633100  527485 command_runner.go:124] ! I0526 21:23:31.235670       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633346  527485 command_runner.go:124] ! I0526 21:23:31.235780       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.633456  527485 command_runner.go:124] ! I0526 21:23:31.248767       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633511  527485 command_runner.go:124] ! I0526 21:23:31.248973       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.633930  527485 command_runner.go:124] ! I0526 21:23:31.270717       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633984  527485 command_runner.go:124] ! I0526 21:23:31.272045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.634373  527485 command_runner.go:124] ! I0526 21:23:31.287807       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.634762  527485 command_runner.go:124] ! I0526 21:23:31.288158       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.634999  527485 command_runner.go:124] ! I0526 21:23:31.302175       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635061  527485 command_runner.go:124] ! I0526 21:23:31.302294       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.635164  527485 command_runner.go:124] ! I0526 21:23:31.318788       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635294  527485 command_runner.go:124] ! I0526 21:23:31.318898       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.635376  527485 command_runner.go:124] ! I0526 21:23:31.340681       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635448  527485 command_runner.go:124] ! I0526 21:23:31.341103       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.635619  527485 command_runner.go:124] ! I0526 21:23:31.364875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635675  527485 command_runner.go:124] ! I0526 21:23:31.365260       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.636120  527485 command_runner.go:124] ! I0526 21:23:31.375229       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.636406  527485 command_runner.go:124] ! I0526 21:23:31.375353       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.636562  527485 command_runner.go:124] ! I0526 21:23:31.384385       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.636629  527485 command_runner.go:124] ! I0526 21:23:31.384585       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.636713  527485 command_runner.go:124] ! I0526 21:23:31.392770       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637831  527485 command_runner.go:124] ! I0526 21:23:31.392939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637847  527485 command_runner.go:124] ! I0526 21:23:31.406398       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637861  527485 command_runner.go:124] ! I0526 21:23:31.406589       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637871  527485 command_runner.go:124] ! I0526 21:23:31.421828       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637892  527485 command_runner.go:124] ! I0526 21:23:31.422392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637906  527485 command_runner.go:124] ! I0526 21:23:31.434772       1 rest.go:131] the default service ipfamily for this cluster is: IPv4
	I0526 21:25:10.637920  527485 command_runner.go:124] ! I0526 21:23:31.530123       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637939  527485 command_runner.go:124] ! I0526 21:23:31.530234       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637953  527485 command_runner.go:124] ! I0526 21:23:31.542917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637969  527485 command_runner.go:124] ! I0526 21:23:31.543258       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637981  527485 command_runner.go:124] ! I0526 21:23:31.558871       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637997  527485 command_runner.go:124] ! I0526 21:23:31.558975       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638010  527485 command_runner.go:124] ! I0526 21:23:31.578311       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638028  527485 command_runner.go:124] ! I0526 21:23:31.578428       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638042  527485 command_runner.go:124] ! I0526 21:23:31.579212       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638061  527485 command_runner.go:124] ! I0526 21:23:31.579406       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638074  527485 command_runner.go:124] ! I0526 21:23:31.593279       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638092  527485 command_runner.go:124] ! I0526 21:23:31.593392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638105  527485 command_runner.go:124] ! I0526 21:23:31.609260       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638121  527485 command_runner.go:124] ! I0526 21:23:31.609368       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638134  527485 command_runner.go:124] ! I0526 21:23:31.626851       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638150  527485 command_runner.go:124] ! I0526 21:23:31.626960       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638162  527485 command_runner.go:124] ! I0526 21:23:31.653023       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638196  527485 command_runner.go:124] ! I0526 21:23:31.653138       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638208  527485 command_runner.go:124] ! I0526 21:23:31.662951       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638223  527485 command_runner.go:124] ! I0526 21:23:31.663349       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638236  527485 command_runner.go:124] ! I0526 21:23:31.683106       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638252  527485 command_runner.go:124] ! I0526 21:23:31.684613       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638264  527485 command_runner.go:124] ! I0526 21:23:31.700741       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638283  527485 command_runner.go:124] ! I0526 21:23:31.701266       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638298  527485 command_runner.go:124] ! I0526 21:23:31.722045       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638317  527485 command_runner.go:124] ! I0526 21:23:31.722235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638332  527485 command_runner.go:124] ! I0526 21:23:31.736295       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638347  527485 command_runner.go:124] ! I0526 21:23:31.737071       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638357  527485 command_runner.go:124] ! I0526 21:23:31.751086       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638366  527485 command_runner.go:124] ! I0526 21:23:31.751202       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638375  527485 command_runner.go:124] ! I0526 21:23:31.767941       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638384  527485 command_runner.go:124] ! I0526 21:23:31.768045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638390  527485 command_runner.go:124] ! I0526 21:23:31.784917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638400  527485 command_runner.go:124] ! I0526 21:23:31.785029       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638409  527485 command_runner.go:124] ! I0526 21:23:31.802204       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638418  527485 command_runner.go:124] ! I0526 21:23:31.802314       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638426  527485 command_runner.go:124] ! I0526 21:23:31.817427       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638440  527485 command_runner.go:124] ! I0526 21:23:31.817616       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638454  527485 command_runner.go:124] ! I0526 21:23:31.837841       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638468  527485 command_runner.go:124] ! I0526 21:23:31.837939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638478  527485 command_runner.go:124] ! I0526 21:23:31.860217       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638493  527485 command_runner.go:124] ! I0526 21:23:31.861221       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638507  527485 command_runner.go:124] ! I0526 21:23:31.871254       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638523  527485 command_runner.go:124] ! I0526 21:23:31.872836       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638534  527485 command_runner.go:124] ! I0526 21:23:31.884052       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638551  527485 command_runner.go:124] ! I0526 21:23:31.884160       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638564  527485 command_runner.go:124] ! I0526 21:23:31.898818       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638578  527485 command_runner.go:124] ! I0526 21:23:31.898925       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638587  527485 command_runner.go:124] ! I0526 21:23:31.913046       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638597  527485 command_runner.go:124] ! I0526 21:23:31.913149       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638605  527485 command_runner.go:124] ! I0526 21:23:31.925884       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638614  527485 command_runner.go:124] ! I0526 21:23:31.925994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638623  527485 command_runner.go:124] ! I0526 21:23:31.939143       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638632  527485 command_runner.go:124] ! I0526 21:23:31.939253       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638640  527485 command_runner.go:124] ! I0526 21:23:31.954393       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638650  527485 command_runner.go:124] ! I0526 21:23:31.956005       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638658  527485 command_runner.go:124] ! I0526 21:23:31.964255       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638670  527485 command_runner.go:124] ! I0526 21:23:31.964369       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638679  527485 command_runner.go:124] ! I0526 21:23:31.980824       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638688  527485 command_runner.go:124] ! I0526 21:23:31.980931       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638697  527485 command_runner.go:124] ! I0526 21:23:31.998875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638706  527485 command_runner.go:124] ! I0526 21:23:31.998978       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638717  527485 command_runner.go:124] ! I0526 21:23:32.014057       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638726  527485 command_runner.go:124] ! I0526 21:23:32.014169       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638736  527485 command_runner.go:124] ! I0526 21:23:32.027301       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638746  527485 command_runner.go:124] ! I0526 21:23:32.027633       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638755  527485 command_runner.go:124] ! I0526 21:23:32.046160       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638765  527485 command_runner.go:124] ! I0526 21:23:32.046890       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638777  527485 command_runner.go:124] ! I0526 21:23:32.068538       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638786  527485 command_runner.go:124] ! I0526 21:23:32.069814       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638795  527485 command_runner.go:124] ! I0526 21:23:32.087119       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638804  527485 command_runner.go:124] ! I0526 21:23:32.087547       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638813  527485 command_runner.go:124] ! I0526 21:23:32.097832       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638822  527485 command_runner.go:124] ! I0526 21:23:32.097940       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638831  527485 command_runner.go:124] ! I0526 21:23:32.107249       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638841  527485 command_runner.go:124] ! I0526 21:23:32.107932       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638849  527485 command_runner.go:124] ! I0526 21:23:32.119796       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638859  527485 command_runner.go:124] ! I0526 21:23:32.119897       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638868  527485 command_runner.go:124] ! I0526 21:23:32.128209       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638877  527485 command_runner.go:124] ! I0526 21:23:32.128321       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638885  527485 command_runner.go:124] ! I0526 21:23:32.138008       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638895  527485 command_runner.go:124] ! I0526 21:23:32.138111       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638903  527485 command_runner.go:124] ! I0526 21:23:32.160727       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638912  527485 command_runner.go:124] ! I0526 21:23:32.160833       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638922  527485 command_runner.go:124] ! I0526 21:23:32.186843       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638932  527485 command_runner.go:124] ! I0526 21:23:32.186949       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638940  527485 command_runner.go:124] ! I0526 21:23:32.198121       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638950  527485 command_runner.go:124] ! I0526 21:23:32.198232       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638958  527485 command_runner.go:124] ! I0526 21:23:32.206015       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638968  527485 command_runner.go:124] ! I0526 21:23:32.206127       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638976  527485 command_runner.go:124] ! I0526 21:23:32.222761       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638986  527485 command_runner.go:124] ! I0526 21:23:32.223204       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638994  527485 command_runner.go:124] ! I0526 21:23:32.232528       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639003  527485 command_runner.go:124] ! I0526 21:23:32.232629       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639013  527485 command_runner.go:124] ! I0526 21:23:32.245897       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639023  527485 command_runner.go:124] ! I0526 21:23:32.246007       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639033  527485 command_runner.go:124] ! I0526 21:23:32.263847       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639057  527485 command_runner.go:124] ! I0526 21:23:32.263950       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639068  527485 command_runner.go:124] ! I0526 21:23:32.275996       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639078  527485 command_runner.go:124] ! I0526 21:23:32.276100       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639087  527485 command_runner.go:124] ! I0526 21:23:32.286992       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639098  527485 command_runner.go:124] ! I0526 21:23:32.288760       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639106  527485 command_runner.go:124] ! I0526 21:23:32.300558       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639116  527485 command_runner.go:124] ! I0526 21:23:32.300656       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639126  527485 command_runner.go:124] ! W0526 21:23:32.466350       1 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
	I0526 21:25:10.639135  527485 command_runner.go:124] ! W0526 21:23:32.475974       1 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639146  527485 command_runner.go:124] ! W0526 21:23:32.486620       1 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639156  527485 command_runner.go:124] ! W0526 21:23:32.495038       1 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639167  527485 command_runner.go:124] ! W0526 21:23:32.498634       1 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639176  527485 command_runner.go:124] ! W0526 21:23:32.503834       1 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639186  527485 command_runner.go:124] ! W0526 21:23:32.506839       1 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639210  527485 command_runner.go:124] ! W0526 21:23:32.511920       1 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
	I0526 21:25:10.639221  527485 command_runner.go:124] ! W0526 21:23:32.512155       1 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
	I0526 21:25:10.639240  527485 command_runner.go:124] ! I0526 21:23:32.520325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:10.639260  527485 command_runner.go:124] ! I0526 21:23:32.520699       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:10.639270  527485 command_runner.go:124] ! I0526 21:23:32.522294       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639281  527485 command_runner.go:124] ! I0526 21:23:32.522675       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639289  527485 command_runner.go:124] ! I0526 21:23:32.531035       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639298  527485 command_runner.go:124] ! I0526 21:23:32.531144       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639309  527485 command_runner.go:124] ! I0526 21:23:34.690784       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:10.639318  527485 command_runner.go:124] ! I0526 21:23:34.691285       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.639331  527485 command_runner.go:124] ! I0526 21:23:34.692130       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
	I0526 21:25:10.639342  527485 command_runner.go:124] ! I0526 21:23:34.692740       1 secure_serving.go:197] Serving securely on [::]:8443
	I0526 21:25:10.639352  527485 command_runner.go:124] ! I0526 21:23:34.693343       1 apf_controller.go:261] Starting API Priority and Fairness config controller
	I0526 21:25:10.639360  527485 command_runner.go:124] ! I0526 21:23:34.693677       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:10.639369  527485 command_runner.go:124] ! I0526 21:23:34.694744       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0526 21:25:10.639380  527485 command_runner.go:124] ! I0526 21:23:34.694836       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0526 21:25:10.639388  527485 command_runner.go:124] ! I0526 21:23:34.694880       1 available_controller.go:475] Starting AvailableConditionController
	I0526 21:25:10.639397  527485 command_runner.go:124] ! I0526 21:23:34.694885       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0526 21:25:10.639408  527485 command_runner.go:124] ! I0526 21:23:34.694904       1 autoregister_controller.go:141] Starting autoregister controller
	I0526 21:25:10.639419  527485 command_runner.go:124] ! I0526 21:23:34.694908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0526 21:25:10.639428  527485 command_runner.go:124] ! I0526 21:23:34.696887       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0526 21:25:10.639437  527485 command_runner.go:124] ! I0526 21:23:34.697053       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0526 21:25:10.639455  527485 command_runner.go:124] ! I0526 21:23:34.697670       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	I0526 21:25:10.639464  527485 command_runner.go:124] ! I0526 21:23:34.697935       1 controller.go:83] Starting OpenAPI AggregationController
	I0526 21:25:10.639472  527485 command_runner.go:124] ! I0526 21:23:34.698627       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0526 21:25:10.639482  527485 command_runner.go:124] ! I0526 21:23:34.705120       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.639492  527485 command_runner.go:124] ! I0526 21:23:34.705289       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:10.639502  527485 command_runner.go:124] ! I0526 21:23:34.706119       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0526 21:25:10.639511  527485 command_runner.go:124] ! I0526 21:23:34.706246       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0526 21:25:10.639527  527485 command_runner.go:124] ! E0526 21:23:34.733148       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.229, ResourceVersion: 0, AdditionalErrorMsg: 
	I0526 21:25:10.639535  527485 command_runner.go:124] ! I0526 21:23:34.762565       1 controller.go:86] Starting OpenAPI controller
	I0526 21:25:10.639543  527485 command_runner.go:124] ! I0526 21:23:34.762983       1 naming_controller.go:291] Starting NamingConditionController
	I0526 21:25:10.639553  527485 command_runner.go:124] ! I0526 21:23:34.763230       1 establishing_controller.go:76] Starting EstablishingController
	I0526 21:25:10.639561  527485 command_runner.go:124] ! I0526 21:23:34.763815       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0526 21:25:10.639572  527485 command_runner.go:124] ! I0526 21:23:34.764676       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0526 21:25:10.639581  527485 command_runner.go:124] ! I0526 21:23:34.765003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0526 21:25:10.639588  527485 command_runner.go:124] ! I0526 21:23:34.894833       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0526 21:25:10.639597  527485 command_runner.go:124] ! I0526 21:23:34.895159       1 cache.go:39] Caches are synced for autoregister controller
	I0526 21:25:10.639605  527485 command_runner.go:124] ! I0526 21:23:34.895543       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0526 21:25:10.639615  527485 command_runner.go:124] ! I0526 21:23:34.895893       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0526 21:25:10.639624  527485 command_runner.go:124] ! I0526 21:23:34.897085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0526 21:25:10.639632  527485 command_runner.go:124] ! I0526 21:23:34.899871       1 apf_controller.go:266] Running API Priority and Fairness config worker
	I0526 21:25:10.639640  527485 command_runner.go:124] ! I0526 21:23:34.907242       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0526 21:25:10.639647  527485 command_runner.go:124] ! I0526 21:23:35.022751       1 controller.go:609] quota admission added evaluator for: namespaces
	I0526 21:25:10.639660  527485 command_runner.go:124] ! I0526 21:23:35.690855       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0526 21:25:10.639673  527485 command_runner.go:124] ! I0526 21:23:35.691097       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0526 21:25:10.639685  527485 command_runner.go:124] ! I0526 21:23:35.708402       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0526 21:25:10.639694  527485 command_runner.go:124] ! I0526 21:23:35.726885       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0526 21:25:10.639704  527485 command_runner.go:124] ! I0526 21:23:35.727088       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0526 21:25:10.639712  527485 command_runner.go:124] ! I0526 21:23:36.334571       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0526 21:25:10.639721  527485 command_runner.go:124] ! I0526 21:23:36.389004       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0526 21:25:10.639729  527485 command_runner.go:124] ! W0526 21:23:36.485873       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.229]
	I0526 21:25:10.639738  527485 command_runner.go:124] ! I0526 21:23:36.487435       1 controller.go:609] quota admission added evaluator for: endpoints
	I0526 21:25:10.639747  527485 command_runner.go:124] ! I0526 21:23:36.499209       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0526 21:25:10.639756  527485 command_runner.go:124] ! I0526 21:23:37.294654       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0526 21:25:10.639763  527485 command_runner.go:124] ! I0526 21:23:38.382157       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0526 21:25:10.639777  527485 command_runner.go:124] ! I0526 21:23:38.454712       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0526 21:25:10.639788  527485 command_runner.go:124] ! I0526 21:23:43.955877       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0526 21:25:10.639796  527485 command_runner.go:124] ! I0526 21:23:53.285833       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0526 21:25:10.639806  527485 command_runner.go:124] ! I0526 21:23:53.338274       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0526 21:25:10.639813  527485 command_runner.go:124] ! I0526 21:24:01.973387       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:10.639823  527485 command_runner.go:124] ! I0526 21:24:01.973608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.639833  527485 command_runner.go:124] ! I0526 21:24:01.973627       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.639839  527485 command_runner.go:124] ! I0526 21:24:43.497572       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:10.639850  527485 command_runner.go:124] ! I0526 21:24:43.497775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.639859  527485 command_runner.go:124] ! I0526 21:24:43.498072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
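Each per-component block in this section follows the same collection pattern: logs.go announces which container is being gathered, then ssh_runner shells out to crictl with a bounded tail so a noisy container cannot flood the report. Below is a minimal stand-alone sketch of that pattern, not minikube's actual helper; the function name tailContainerLogs is a made-up placeholder and the container ID is simply the etcd container from the gathering step that follows.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// tailContainerLogs mirrors the "sudo /bin/crictl logs --tail 400 <id>"
// commands recorded in this report; the helper name is hypothetical.
func tailContainerLogs(containerID string, tail int) (string, error) {
	out, err := exec.Command("sudo", "/bin/crictl", "logs",
		"--tail", strconv.Itoa(tail), containerID).CombinedOutput()
	return string(out), err
}

func main() {
	// Container ID taken from the etcd gathering step below.
	out, err := tailContainerLogs("c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(out)
}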
	I0526 21:25:10.650003  527485 logs.go:123] Gathering logs for etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] ...
	I0526 21:25:10.650018  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:10.671546  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:10.671564  527485 command_runner.go:124] ! 2021-05-26 21:23:30.145280 I | etcdmain: etcd Version: 3.4.13
	I0526 21:25:10.671571  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146007 I | etcdmain: Git SHA: ae9734ed2
	I0526 21:25:10.671578  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146359 I | etcdmain: Go Version: go1.12.17
	I0526 21:25:10.671590  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146935 I | etcdmain: Go OS/Arch: linux/amd64
	I0526 21:25:10.671599  527485 command_runner.go:124] ! 2021-05-26 21:23:30.147549 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
	I0526 21:25:10.671608  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:10.671622  527485 command_runner.go:124] ! 2021-05-26 21:23:30.148927 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:10.671632  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159191 I | embed: name = multinode-20210526212238-510955
	I0526 21:25:10.671639  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159781 I | embed: data dir = /var/lib/minikube/etcd
	I0526 21:25:10.671646  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161368 I | embed: member dir = /var/lib/minikube/etcd/member
	I0526 21:25:10.671657  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161781 I | embed: heartbeat = 100ms
	I0526 21:25:10.671665  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162024 I | embed: election = 1000ms
	I0526 21:25:10.671671  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162419 I | embed: snapshot count = 10000
	I0526 21:25:10.671680  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162834 I | embed: advertise client URLs = https://192.168.39.229:2379
	I0526 21:25:10.671688  527485 command_runner.go:124] ! 2021-05-26 21:23:30.186657 I | etcdserver: starting member b8647f2870156d71 in cluster 2bfbf13ce68722b
	I0526 21:25:10.671695  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=()
	I0526 21:25:10.671702  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 0
	I0526 21:25:10.671712  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: newRaft b8647f2870156d71 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	I0526 21:25:10.671718  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 1
	I0526 21:25:10.671726  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:10.671735  527485 command_runner.go:124] ! 2021-05-26 21:23:30.205555 W | auth: simple token is not cryptographically signed
	I0526 21:25:10.671745  527485 command_runner.go:124] ! 2021-05-26 21:23:30.234208 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	I0526 21:25:10.671764  527485 command_runner.go:124] ! 2021-05-26 21:23:30.243414 I | etcdserver: b8647f2870156d71 as single-node; fast-forwarding 9 ticks (election ticks 10)
	I0526 21:25:10.671777  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:10.671789  527485 command_runner.go:124] ! 2021-05-26 21:23:30.255082 I | etcdserver/membership: added member b8647f2870156d71 [https://192.168.39.229:2380] to cluster 2bfbf13ce68722b
	I0526 21:25:10.671802  527485 command_runner.go:124] ! 2021-05-26 21:23:30.261097 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:10.671813  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264526 I | embed: listening for peers on 192.168.39.229:2380
	I0526 21:25:10.671822  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264701 I | embed: listening for metrics on http://127.0.0.1:2381
	I0526 21:25:10.671833  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 is starting a new election at term 1
	I0526 21:25:10.671849  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became candidate at term 2
	I0526 21:25:10.671863  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 2
	I0526 21:25:10.671873  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became leader at term 2
	I0526 21:25:10.671880  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 2
	I0526 21:25:10.671890  527485 command_runner.go:124] ! 2021-05-26 21:23:30.893688 I | etcdserver: setting up the initial cluster version to 3.4
	I0526 21:25:10.671898  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897562 I | embed: ready to serve client requests
	I0526 21:25:10.671911  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897893 I | etcdserver: published {Name:multinode-20210526212238-510955 ClientURLs:[https://192.168.39.229:2379]} to cluster 2bfbf13ce68722b
	I0526 21:25:10.671924  527485 command_runner.go:124] ! 2021-05-26 21:23:30.898097 I | embed: ready to serve client requests
	I0526 21:25:10.671937  527485 command_runner.go:124] ! 2021-05-26 21:23:30.904911 I | embed: serving client requests on 127.0.0.1:2379
	I0526 21:25:10.671951  527485 command_runner.go:124] ! 2021-05-26 21:23:30.925406 I | embed: serving client requests on 192.168.39.229:2379
	I0526 21:25:10.671961  527485 command_runner.go:124] ! 2021-05-26 21:23:30.930764 N | etcdserver/membership: set the initial cluster version to 3.4
	I0526 21:25:10.671969  527485 command_runner.go:124] ! 2021-05-26 21:23:30.973015 I | etcdserver/api: enabled capabilities for version 3.4
	I0526 21:25:10.671984  527485 command_runner.go:124] ! 2021-05-26 21:23:35.005110 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (158.136927ms) to execute
	I0526 21:25:10.672004  527485 command_runner.go:124] ! 2021-05-26 21:23:35.008540 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (159.3133ms) to execute
	I0526 21:25:10.672026  527485 command_runner.go:124] ! 2021-05-26 21:23:35.012635 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.936302ms) to execute
	I0526 21:25:10.672068  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013064 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (148.811077ms) to execute
	I0526 21:25:10.672080  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013577 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (157.477156ms) to execute
	I0526 21:25:10.672094  527485 command_runner.go:124] ! 2021-05-26 21:23:48.034379 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672107  527485 command_runner.go:124] ! 2021-05-26 21:23:50.916831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672121  527485 command_runner.go:124] ! 2021-05-26 21:24:00.917857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672135  527485 command_runner.go:124] ! 2021-05-26 21:24:10.918220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672148  527485 command_runner.go:124] ! 2021-05-26 21:24:20.917896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672160  527485 command_runner.go:124] ! 2021-05-26 21:24:30.916918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672169  527485 command_runner.go:124] ! 2021-05-26 21:24:40.917190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672185  527485 command_runner.go:124] ! 2021-05-26 21:24:50.917237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672202  527485 command_runner.go:124] ! 2021-05-26 21:25:00.916673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
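The recurring "etcdserver/api/etcdhttp: /health OK (status code 200)" entries above are served on the metrics listener that etcd reports earlier in the same block (http://127.0.0.1:2381). A minimal sketch of reproducing that check from the node, assuming the listener is still reachable:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	// Metrics URL taken from the etcd log above; /health is served on it.
	resp, err := client.Get("http://127.0.0.1:2381/health")
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}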
	I0526 21:25:10.675785  527485 logs.go:123] Gathering logs for coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] ...
	I0526 21:25:10.675801  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:10.692545  527485 command_runner.go:124] > .:53
	I0526 21:25:10.692677  527485 command_runner.go:124] > [INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	I0526 21:25:10.693108  527485 command_runner.go:124] > CoreDNS-1.7.0
	I0526 21:25:10.693188  527485 command_runner.go:124] > linux/amd64, go1.14.4, f59c03d
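CoreDNS only reports its Corefile hash and version here, so in-cluster name resolution has to be exercised from a client. The sketch below points a Go resolver directly at a cluster DNS address; 10.96.0.10 is the conventional kube-dns ClusterIP and is an assumption, not something shown in this log.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumed cluster DNS service address; adjust to the kube-dns ClusterIP in use.
	const clusterDNS = "10.96.0.10:53"

	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, clusterDNS)
		},
	}

	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}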
	I0526 21:25:10.694231  527485 logs.go:123] Gathering logs for kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] ...
	I0526 21:25:10.694248  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:10.710551  527485 command_runner.go:124] ! I0526 21:23:31.228401       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:10.710587  527485 command_runner.go:124] ! W0526 21:23:34.792981       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0526 21:25:10.710611  527485 command_runner.go:124] ! W0526 21:23:34.795544       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:10.710629  527485 command_runner.go:124] ! W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0526 21:25:10.710640  527485 command_runner.go:124] ! W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:25:10.710657  527485 command_runner.go:124] ! I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:10.710678  527485 command_runner.go:124] ! I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:10.710691  527485 command_runner.go:124] ! I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:25:10.710699  527485 command_runner.go:124] ! I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:10.710720  527485 command_runner.go:124] ! E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:10.710751  527485 command_runner.go:124] ! E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:10.710780  527485 command_runner.go:124] ! E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:10.710811  527485 command_runner.go:124] ! E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0526 21:25:10.710835  527485 command_runner.go:124] ! E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0526 21:25:10.710862  527485 command_runner.go:124] ! E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:10.710892  527485 command_runner.go:124] ! E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:10.710912  527485 command_runner.go:124] ! E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:10.710961  527485 command_runner.go:124] ! E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:10.710987  527485 command_runner.go:124] ! E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0526 21:25:10.711011  527485 command_runner.go:124] ! E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:10.711037  527485 command_runner.go:124] ! E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0526 21:25:10.711065  527485 command_runner.go:124] ! E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:10.711085  527485 command_runner.go:124] ! E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:10.711116  527485 command_runner.go:124] ! E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:10.711175  527485 command_runner.go:124] ! E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:10.711212  527485 command_runner.go:124] ! E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:10.711241  527485 command_runner.go:124] ! E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:10.711260  527485 command_runner.go:124] ! I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0526 21:25:10.715541  527485 logs.go:123] Gathering logs for kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] ...
	I0526 21:25:10.715557  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:10.734573  527485 command_runner.go:124] ! I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:25:10.734591  527485 command_runner.go:124] ! I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	I0526 21:25:10.734600  527485 command_runner.go:124] ! W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:25:10.734607  527485 command_runner.go:124] ! I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:25:10.734613  527485 command_runner.go:124] ! I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:25:10.734624  527485 command_runner.go:124] ! I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:25:10.734635  527485 command_runner.go:124] ! I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:25:10.734643  527485 command_runner.go:124] ! I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:25:10.734653  527485 command_runner.go:124] ! I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:25:10.734661  527485 command_runner.go:124] ! I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:25:10.734671  527485 command_runner.go:124] ! I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:25:10.734679  527485 command_runner.go:124] ! I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:25:10.734686  527485 command_runner.go:124] ! I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
	I0526 21:25:10.735203  527485 logs.go:123] Gathering logs for storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] ...
	I0526 21:25:10.735216  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:10.754134  527485 command_runner.go:124] ! I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:25:10.754160  527485 command_runner.go:124] ! I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:25:10.754176  527485 command_runner.go:124] ! I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:25:10.754191  527485 command_runner.go:124] ! I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:25:10.754214  527485 command_runner.go:124] ! I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:10.754248  527485 command_runner.go:124] ! I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:25:10.754282  527485 command_runner.go:124] ! I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
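The storage-provisioner lines above show the usual client-go leader-election handshake: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the provisioner controller. Here is a compact sketch of that handshake under the assumption that client-go is available; it uses a Lease lock and a placeholder identity, whereas the provisioner in this run locks an Endpoints object.

package main

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock namespace/name taken from the log; the identity is a placeholder.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-identity"},
	)
	if err != nil {
		klog.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("acquired lease; provisioner controller would start here")
			},
			OnStoppedLeading: func() {
				klog.Info("lost lease; shutting down")
			},
		},
	})
}

The Lease lock is what current client-go recommends; the Endpoints-based lock visible in this log is the older form of the same mechanism.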
	I0526 21:25:10.755037  527485 logs.go:123] Gathering logs for containerd ...
	I0526 21:25:10.755055  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:25:10.769442  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:10 UTC. --
	I0526 21:25:10.769468  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:10.769480  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:10.769499  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.412639957Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:10.769524  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.454795725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:10.769544  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.455022736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769579  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456819758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.769602  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456940685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769632  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457199432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.769665  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457299817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769687  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457342626Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:10.769708  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457353348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769731  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457375564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769752  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457518971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769784  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457752665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.769804  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457768067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:10.769826  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457801760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:10.769844  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457811694Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:10.769869  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461742670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:10.769900  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461851430Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:10.769923  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462036878Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769945  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462069131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769964  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462082171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769980  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462094524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769998  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462115116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770015  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462127721Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770031  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462139766Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770046  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462157542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770064  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462167923Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:10.770077  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462295610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:10.770092  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462357720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:10.770106  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462745295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770118  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462770123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:10.770132  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462815565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770147  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462827921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770162  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462846347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770175  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462857513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770189  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462870788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770205  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462881154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770562  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462892049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770614  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462903002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770642  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462913917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:10.770670  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462958764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770698  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462972025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770728  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462983386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770754  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462994704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770787  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463133131Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `systemd_cgroup` only works for runtime io.containerd.runtime.v1.linux"
	I0526 21:25:10.770814  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463145276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770833  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463363744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:10.770858  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463401676Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:10.770882  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463415404Z" level=info msg="containerd successfully booted in 0.052163s"
	I0526 21:25:10.770900  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Stopping containerd container runtime...
	I0526 21:25:10.770913  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: containerd.service: Succeeded.
	I0526 21:25:10.770931  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Stopped containerd container runtime.
	I0526 21:25:10.770944  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:10.770962  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:10.770983  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.677351233Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:10.771013  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703735354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:10.771044  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703939180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771090  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706070962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.771127  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706222939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771169  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706683734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.771197  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706837938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771225  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706963959Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:10.771253  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707081760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771281  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707216688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771315  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707381113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771361  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707841019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.771389  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707973506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:10.771416  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708095816Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:10.771434  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708236930Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:10.771465  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708536776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:10.771489  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708698510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:10.771519  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708937323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771545  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709074999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771624  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709196994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771671  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709315424Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771707  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709506686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771744  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709629192Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771862  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709743913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771923  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709857985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771952  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709979410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:10.771979  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710125076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:10.772005  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710271949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:10.772249  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710830775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.772284  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710974791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:10.772304  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711117145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772322  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711243334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772337  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711363735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772351  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711549081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772363  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711666234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772377  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711781506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772389  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711895813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772402  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712013139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772415  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712131897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:10.772427  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712269473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772444  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712503525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772456  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712659007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772512  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712779064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772533  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712986218Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	I0526 21:25:10.772638  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713141331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00037b050 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.2 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	I0526 21:25:10.772659  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713322225Z" level=info msg="Connect containerd service"
	I0526 21:25:10.772672  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713538361Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	I0526 21:25:10.772693  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714213931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:10.772708  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714359921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772723  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714868242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:10.772734  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715023618Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:10.772747  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715142631Z" level=info msg="containerd successfully booted in 0.038760s"
	I0526 21:25:10.772760  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726087774Z" level=info msg="Start subscribing containerd event"
	I0526 21:25:10.772778  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726733995Z" level=info msg="Start recovering state"
	I0526 21:25:10.772791  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781395051Z" level=info msg="Start event monitor"
	I0526 21:25:10.772803  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781771001Z" level=info msg="Start snapshots syncer"
	I0526 21:25:10.772816  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781893491Z" level=info msg="Start cni network conf syncer"
	I0526 21:25:10.772828  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.782003464Z" level=info msg="Start streaming server"
	I0526 21:25:10.772848  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.484581294Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772886  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.490843770Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772916  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.501056680Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772936  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.508591647Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772955  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.580716340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486 pid=2407
	I0526 21:25:10.772975  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.598809833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb pid=2435
	I0526 21:25:10.773004  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602060491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5 pid=2434
	I0526 21:25:10.773028  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602007310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e pid=2452
	I0526 21:25:10.773050  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.066808539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\""
	I0526 21:25:10.773076  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.074803022Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	I0526 21:25:10.773109  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.194718464Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:10.773129  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.196219933Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:10.773160  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.262678371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\""
	I0526 21:25:10.773188  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.272571919Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	I0526 21:25:10.773220  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.347228547Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:10.773243  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.349365690Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:10.773267  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.419043703Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\" returns successfully"
	I0526 21:25:10.773304  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.520520792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\""
	I0526 21:25:10.773328  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.527415671Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	I0526 21:25:10.773361  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.566421321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\""
	I0526 21:25:10.773388  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.575850717Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	I0526 21:25:10.773419  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.621335319Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:10.773446  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.623169879Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:10.773470  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.681255114Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\" returns successfully"
	I0526 21:25:10.773498  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.683704929Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:10.773522  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.684577023Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:10.773546  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.017920282Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\" returns successfully"
	I0526 21:25:10.773564  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.056525418Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\" returns successfully"
	I0526 21:25:10.773635  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.290788536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	I0526 21:25:10.773663  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.802102062Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.773691  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.839975209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8 pid=2987
	I0526 21:25:10.773715  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.915628984Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.773738  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.950847165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a pid=3013
	I0526 21:25:10.773769  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.116312794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\""
	I0526 21:25:10.773796  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.127305490Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	I0526 21:25:10.773828  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.182202148Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:10.773851  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.188910123Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:10.773875  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.381612238Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\" returns successfully"
	I0526 21:25:10.773900  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.674364903Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	I0526 21:25:10.773928  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683119285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:10.773960  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683711665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:10.773983  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582858367Z" level=error msg="get state for 53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8" error="context deadline exceeded: unknown"
	I0526 21:25:10.773999  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582967226Z" level=warning msg="unknown status" status=0
	I0526 21:25:10.774031  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.969753374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\""
	I0526 21:25:10.774059  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.975070195Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	I0526 21:25:10.774086  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.027887855Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:10.774108  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.029566085Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:10.774136  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.574608517Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\" returns successfully"
	I0526 21:25:10.774158  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.297649575Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.774183  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.323344186Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.774211  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.332120092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55 pid=3313
	I0526 21:25:10.774238  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.442356819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900 pid=3376
	I0526 21:25:10.774269  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.792546853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\""
	I0526 21:25:10.774301  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.796339883Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	I0526 21:25:10.774329  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.843281999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\""
	I0526 21:25:10.774358  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.849108598Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	I0526 21:25:10.774392  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.875948742Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:10.774411  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.879073015Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:10.774443  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.915826719Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:10.774468  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.918179651Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:10.774491  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.083539707Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\" returns successfully"
	I0526 21:25:10.774511  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.120722012Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\" returns successfully"
	I0526 21:25:10.791687  527485 logs.go:123] Gathering logs for kubelet ...
	I0526 21:25:10.791704  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:25:10.816785  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:10 UTC. --
	I0526 21:25:10.816809  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:10.816837  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.816878  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.816898  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365155    2343 server.go:416] Version: v1.20.2
	I0526 21:25:10.816915  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365664    2343 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:10.816937  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.382328    2343 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.816971  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:22.383887    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817007  527485 command_runner.go:124] > May 26 21:23:24 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:24.586559    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817034  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.392858    2343 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:10.817055  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.393993    2343 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:10.817119  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.394298    2343 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:10.817142  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395126    2343 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:10.817161  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395334    2343 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:10.817179  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395348    2343 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:10.817195  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395816    2343 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:10.817219  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395929    2343 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.817244  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396315    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.817262  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396571    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.817283  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397666    2343 remote_image.go:50] parsed scheme: ""
	I0526 21:25:10.817303  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397691    2343 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.817326  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397829    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.817342  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397957    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.817355  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.400786    2343 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:10.817364  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.401761    2343 kubelet.go:273] Watching apiserver
	I0526 21:25:10.817388  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.419726    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817413  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.433343    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817430  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.434846    2343 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:10.817454  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.435179    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817469  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.695431    2343 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:10.817480  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:10.817494  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:27.696850    2343 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0526 21:25:10.817506  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.698714    2343 server.go:1176] Started kubelet
	I0526 21:25:10.817516  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.699681    2343 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:10.817528  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.701131    2343 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:10.817610  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.701698    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:10.817626  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.703923    2343 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:10.817637  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.707734    2343 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:10.817649  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.708096    2343 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:10.817680  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.708889    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817719  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.709701    2343 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817746  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.711040    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.817763  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711583    2343 client.go:86] parsed scheme: "unix"
	I0526 21:25:10.817779  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711779    2343 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:10.817796  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712280    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.817808  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712673    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.817820  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782226    2343 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:10.817833  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782318    2343 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:10.817852  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782638    2343 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:10.817952  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.799125    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:10.817986  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.809183    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.818004  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810505    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818016  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810636    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818030  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876097    2343 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:10.818043  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876127    2343 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:10.818056  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876145    2343 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:10.818074  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.876191    2343 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:10.818097  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.877853    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818120  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910604    2343 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818134  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910787    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818151  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.976408    2343 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
	I0526 21:25:10.818165  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.987845    2343 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:10.818204  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.000709    2343 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:10.818223  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.001042    2343 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:10.818247  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.004395    2343 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818267  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.010900    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818293  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.011906    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.818323  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.012281    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818343  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.111839    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818362  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.177382    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818382  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.180087    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818402  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.181373    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818421  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.182941    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818459  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185069    2343 status_manager.go:550] Failed to get status for pod "kube-controller-manager-multinode-20210526212238-510955_kube-system(474c55dfb64741cc485e46b6bb9f2dc0)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818497  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185417    2343 status_manager.go:550] Failed to get status for pod "kube-scheduler-multinode-20210526212238-510955_kube-system(6b4a0ee8b3d15a1c2e47c15d32e6eb0d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818538  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.201047    2343 status_manager.go:550] Failed to get status for pod "kube-apiserver-multinode-20210526212238-510955_kube-system(b42b6879229f245abab6047de8662a2f)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818574  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.202364    2343 status_manager.go:550] Failed to get status for pod "etcd-multinode-20210526212238-510955_kube-system(34530b4d5ce1b17919f3b8976b2d0456)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818594  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.212215    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818621  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309602    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.818643  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309839    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.818673  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310062    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.818706  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310275    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818740  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310572    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818772  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310900    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818805  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311066    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.818834  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311200    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818858  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311326    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818884  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.311324    2343 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818909  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311643    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:10.818931  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311955    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.818944  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.312763    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818969  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.318006    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dmultinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818993  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.361617    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819006  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.412938    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819019  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.414299    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.819041  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.420140    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819055  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.513925    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819068  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.614235    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819090  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.620010    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819103  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.714407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819124  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.717664    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819137  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.815037    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819159  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.819848    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819172  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.915364    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819186  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.015843    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819212  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.112804    2343 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819226  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.116234    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819240  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.217167    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819263  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.219890    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819282  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:29.223096    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.819296  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.317528    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819306  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.418231    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819326  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.419707    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819339  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.520018    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819352  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.620736    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819364  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.721115    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819376  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.821411    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819388  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.921772    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819398  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.022147    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819412  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.122970    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819424  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.223407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819437  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.323609    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819451  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.424033    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819465  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.524613    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819477  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.625186    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819489  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.725563    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819501  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.826076    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819516  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.932677    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819529  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:31.021296    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.819568  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.033185    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819581  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.133540    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819592  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.234158    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819605  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.334934    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819618  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.435265    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819630  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.535646    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819643  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.636091    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819656  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.736769    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819675  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.837337    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819692  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.937851    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819711  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.038171    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819730  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.138719    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819750  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.239058    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819769  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.339598    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819782  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.440290    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819794  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.540624    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819806  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.641006    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819821  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.741403    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819840  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.841966    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819860  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.942585    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819885  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.002095    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.819904  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.042747    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819921  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.142869    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819933  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.243254    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819946  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.343706    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819957  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.444105    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819969  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.545421    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819981  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.645867    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819994  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.746343    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820006  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.846868    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820019  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.947104    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820031  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.047842    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820043  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.148334    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820055  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.248550    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820069  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.349232    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820083  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.449632    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820098  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.549987    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820111  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.650314    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820123  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.751182    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820133  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:34.832693    2343 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:10.820150  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.841269    2343 nodelease.go:49] failed to get node "multinode-20210526212238-510955" when trying to set owner ref to the node lease: nodes "multinode-20210526212238-510955" not found
	I0526 21:25:10.820162  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.851652    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820176  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.952325    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820189  527485 command_runner.go:124] > May 26 21:23:35 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:35.015600    2343 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:10.820206  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:38.003372    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.820221  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:38.252332    2343 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.820233  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0526 21:25:10.820242  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: kubelet.service: Succeeded.
	I0526 21:25:10.820251  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0526 21:25:10.820260  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:10.820281  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.820301  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.820312  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.567074    2767 server.go:416] Version: v1.20.2
	I0526 21:25:10.820325  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.569090    2767 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:10.820337  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.580189    2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0526 21:25:10.820348  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.581836    2767 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.820360  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.594567    2767 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:10.820371  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596007    2767 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:10.820410  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596173    2767 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:10.820424  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596418    2767 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:10.820436  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596689    2767 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:10.820447  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596801    2767 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:10.820456  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597107    2767 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:10.820467  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597233    2767 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.820480  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597387    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.820490  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597579    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.820500  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597846    2767 remote_image.go:50] parsed scheme: ""
	I0526 21:25:10.820510  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597965    2767 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.820526  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598781    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.820537  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598958    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.820547  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599605    2767 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:10.820556  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599963    2767 kubelet.go:273] Watching apiserver
	I0526 21:25:10.820568  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.629159    2767 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:10.820581  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.914429    2767 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:10.820590  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:10.820600  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.918059    2767 server.go:1176] Started kubelet
	I0526 21:25:10.820610  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.928363    2767 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:10.820620  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.931699    2767 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:10.820629  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.943931    2767 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:10.820639  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.945256    2767 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:10.820654  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.949736    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.820666  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.949953    2767 client.go:86] parsed scheme: "unix"
	I0526 21:25:10.820682  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950079    2767 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:10.820701  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950244    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.820716  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950360    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.820733  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.960536    2767 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:10.820750  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.047200    2767 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.820770  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063101    2767 kubelet_node_status.go:109] Node multinode-20210526212238-510955 was previously registered
	I0526 21:25:10.820781  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063585    2767 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:10.820791  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100820    2767 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:10.820803  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100987    2767 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:10.820813  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.101019    2767 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:10.820828  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:44.101062    2767 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:10.820838  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167420    2767 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:10.820848  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167823    2767 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:10.820858  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167963    2767 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:10.820876  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168200    2767 state_mem.go:88] [cpumanager] updated default cpuset: ""
	I0526 21:25:10.820888  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168317    2767 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	I0526 21:25:10.820897  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168438    2767 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:10.820909  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: W0526 21:23:44.170589    2767 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:10.820919  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.170973    2767 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:10.820929  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201167    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820939  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201423    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820949  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202839    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820961  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202968    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820983  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349811    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821004  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349855    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821023  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349894    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.821043  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349913    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.821065  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821085  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821105  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821126  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:10.821144  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349988    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.821165  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350013    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.821188  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350027    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.821198  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:10.821213  527485 command_runner.go:124] > May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.821224  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	I0526 21:25:10.821234  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	I0526 21:25:10.821249  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.821260  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821282  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821302  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821323  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821342  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821354  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821374  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821396  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821416  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821436  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821452  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.821472  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	I0526 21:25:10.821483  527485 command_runner.go:124] > May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821493  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821521  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:10.821545  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:10.821568  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:10.821592  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:10.852060  527485 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:25:10.852084  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0526 21:25:11.005822  527485 command_runner.go:124] > Name:               multinode-20210526212238-510955
	I0526 21:25:11.005848  527485 command_runner.go:124] > Roles:              control-plane,master
	I0526 21:25:11.005856  527485 command_runner.go:124] > Labels:             beta.kubernetes.io/arch=amd64
	I0526 21:25:11.005861  527485 command_runner.go:124] >                     beta.kubernetes.io/os=linux
	I0526 21:25:11.005867  527485 command_runner.go:124] >                     kubernetes.io/arch=amd64
	I0526 21:25:11.005877  527485 command_runner.go:124] >                     kubernetes.io/hostname=multinode-20210526212238-510955
	I0526 21:25:11.005882  527485 command_runner.go:124] >                     kubernetes.io/os=linux
	I0526 21:25:11.005890  527485 command_runner.go:124] >                     minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	I0526 21:25:11.005898  527485 command_runner.go:124] >                     minikube.k8s.io/name=multinode-20210526212238-510955
	I0526 21:25:11.005906  527485 command_runner.go:124] >                     minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	I0526 21:25:11.005915  527485 command_runner.go:124] >                     minikube.k8s.io/version=v1.20.0
	I0526 21:25:11.005921  527485 command_runner.go:124] >                     node-role.kubernetes.io/control-plane=
	I0526 21:25:11.005927  527485 command_runner.go:124] >                     node-role.kubernetes.io/master=
	I0526 21:25:11.005937  527485 command_runner.go:124] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	I0526 21:25:11.005948  527485 command_runner.go:124] >                     node.alpha.kubernetes.io/ttl: 0
	I0526 21:25:11.005960  527485 command_runner.go:124] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0526 21:25:11.005971  527485 command_runner.go:124] > CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	I0526 21:25:11.005992  527485 command_runner.go:124] > Taints:             <none>
	I0526 21:25:11.006002  527485 command_runner.go:124] > Unschedulable:      false
	I0526 21:25:11.006007  527485 command_runner.go:124] > Lease:
	I0526 21:25:11.006017  527485 command_runner.go:124] >   HolderIdentity:  multinode-20210526212238-510955
	I0526 21:25:11.006027  527485 command_runner.go:124] >   AcquireTime:     <unset>
	I0526 21:25:11.006036  527485 command_runner.go:124] >   RenewTime:       Wed, 26 May 2021 21:25:04 +0000
	I0526 21:25:11.006041  527485 command_runner.go:124] > Conditions:
	I0526 21:25:11.006056  527485 command_runner.go:124] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0526 21:25:11.006074  527485 command_runner.go:124] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0526 21:25:11.006095  527485 command_runner.go:124] >   MemoryPressure   False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0526 21:25:11.006138  527485 command_runner.go:124] >   DiskPressure     False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0526 21:25:11.006157  527485 command_runner.go:124] >   PIDPressure      False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0526 21:25:11.006184  527485 command_runner.go:124] >   Ready            True    Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	I0526 21:25:11.006193  527485 command_runner.go:124] > Addresses:
	I0526 21:25:11.006200  527485 command_runner.go:124] >   InternalIP:  192.168.39.229
	I0526 21:25:11.006208  527485 command_runner.go:124] >   Hostname:    multinode-20210526212238-510955
	I0526 21:25:11.006212  527485 command_runner.go:124] > Capacity:
	I0526 21:25:11.006220  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:11.006225  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:11.006232  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:11.006236  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:11.006242  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:11.006246  527485 command_runner.go:124] > Allocatable:
	I0526 21:25:11.006252  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:11.006257  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:11.006261  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:11.006266  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:11.006272  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:11.006276  527485 command_runner.go:124] > System Info:
	I0526 21:25:11.006282  527485 command_runner.go:124] >   Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	I0526 21:25:11.006288  527485 command_runner.go:124] >   System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	I0526 21:25:11.006295  527485 command_runner.go:124] >   Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	I0526 21:25:11.006300  527485 command_runner.go:124] >   Kernel Version:             4.19.182
	I0526 21:25:11.006313  527485 command_runner.go:124] >   OS Image:                   Buildroot 2020.02.12
	I0526 21:25:11.006317  527485 command_runner.go:124] >   Operating System:           linux
	I0526 21:25:11.006323  527485 command_runner.go:124] >   Architecture:               amd64
	I0526 21:25:11.006328  527485 command_runner.go:124] >   Container Runtime Version:  containerd://1.4.4
	I0526 21:25:11.006334  527485 command_runner.go:124] >   Kubelet Version:            v1.20.2
	I0526 21:25:11.006339  527485 command_runner.go:124] >   Kube-Proxy Version:         v1.20.2
	I0526 21:25:11.006345  527485 command_runner.go:124] > PodCIDR:                      10.244.0.0/24
	I0526 21:25:11.006350  527485 command_runner.go:124] > PodCIDRs:                     10.244.0.0/24
	I0526 21:25:11.006357  527485 command_runner.go:124] > Non-terminated Pods:          (8 in total)
	I0526 21:25:11.006368  527485 command_runner.go:124] >   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	I0526 21:25:11.006382  527485 command_runner.go:124] >   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	I0526 21:25:11.006395  527485 command_runner.go:124] >   kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     78s
	I0526 21:25:11.006407  527485 command_runner.go:124] >   kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         87s
	I0526 21:25:11.006419  527485 command_runner.go:124] >   kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      78s
	I0526 21:25:11.006430  527485 command_runner.go:124] >   kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	I0526 21:25:11.006451  527485 command_runner.go:124] >   kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	I0526 21:25:11.006465  527485 command_runner.go:124] >   kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	I0526 21:25:11.006478  527485 command_runner.go:124] >   kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	I0526 21:25:11.006490  527485 command_runner.go:124] >   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	I0526 21:25:11.006496  527485 command_runner.go:124] > Allocated resources:
	I0526 21:25:11.006501  527485 command_runner.go:124] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0526 21:25:11.006509  527485 command_runner.go:124] >   Resource           Requests     Limits
	I0526 21:25:11.006514  527485 command_runner.go:124] >   --------           --------     ------
	I0526 21:25:11.006521  527485 command_runner.go:124] >   cpu                850m (42%)   100m (5%)
	I0526 21:25:11.006526  527485 command_runner.go:124] >   memory             220Mi (10%)  220Mi (10%)
	I0526 21:25:11.006533  527485 command_runner.go:124] >   ephemeral-storage  100Mi (0%)   0 (0%)
	I0526 21:25:11.006539  527485 command_runner.go:124] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0526 21:25:11.006545  527485 command_runner.go:124] > Events:
	I0526 21:25:11.006552  527485 command_runner.go:124] >   Type    Reason                   Age                  From        Message
	I0526 21:25:11.006561  527485 command_runner.go:124] >   ----    ------                   ----                 ----        -------
	I0526 21:25:11.006568  527485 command_runner.go:124] >   Normal  Starting                 104s                 kubelet     Starting kubelet.
	I0526 21:25:11.006580  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  103s (x4 over 104s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:11.006592  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    103s (x3 over 104s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:11.006603  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     103s (x3 over 104s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:11.006615  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  103s                 kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:11.006622  527485 command_runner.go:124] >   Normal  Starting                 88s                  kubelet     Starting kubelet.
	I0526 21:25:11.006634  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  87s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:11.006644  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    87s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:11.006655  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     87s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:11.006670  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  87s                  kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:11.006684  527485 command_runner.go:124] >   Normal  Starting                 77s                  kube-proxy  Starting kube-proxy.
	I0526 21:25:11.006702  527485 command_runner.go:124] >   Normal  NodeReady                67s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
	I0526 21:25:11.009708  527485 logs.go:123] Gathering logs for kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] ...
	I0526 21:25:11.009735  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:11.032379  527485 command_runner.go:124] ! Flag --port has been deprecated, see --secure-port instead.
	I0526 21:25:11.032406  527485 command_runner.go:124] ! I0526 21:23:30.770698       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:11.032417  527485 command_runner.go:124] ! I0526 21:23:31.105740       1 controllermanager.go:176] Version: v1.20.2
	I0526 21:25:11.032433  527485 command_runner.go:124] ! I0526 21:23:31.110528       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:11.032447  527485 command_runner.go:124] ! I0526 21:23:31.110685       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:11.032466  527485 command_runner.go:124] ! I0526 21:23:31.111406       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	I0526 21:25:11.032479  527485 command_runner.go:124] ! I0526 21:23:31.111685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:11.032495  527485 command_runner.go:124] ! I0526 21:23:37.283320       1 shared_informer.go:240] Waiting for caches to sync for tokens
	I0526 21:25:11.032508  527485 command_runner.go:124] ! I0526 21:23:37.384858       1 shared_informer.go:247] Caches are synced for tokens 
	I0526 21:25:11.032520  527485 command_runner.go:124] ! I0526 21:23:37.398260       1 controllermanager.go:554] Started "csrcleaner"
	I0526 21:25:11.032537  527485 command_runner.go:124] ! I0526 21:23:37.398681       1 cleaner.go:82] Starting CSR cleaner controller
	I0526 21:25:11.032562  527485 command_runner.go:124] ! I0526 21:23:37.436326       1 controllermanager.go:554] Started "tokencleaner"
	I0526 21:25:11.032577  527485 command_runner.go:124] ! I0526 21:23:37.436948       1 tokencleaner.go:118] Starting token cleaner controller
	I0526 21:25:11.032593  527485 command_runner.go:124] ! I0526 21:23:37.437051       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0526 21:25:11.032609  527485 command_runner.go:124] ! I0526 21:23:37.437060       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0526 21:25:11.032628  527485 command_runner.go:124] ! E0526 21:23:37.458692       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	I0526 21:25:11.032646  527485 command_runner.go:124] ! W0526 21:23:37.458788       1 controllermanager.go:546] Skipping "service"
	I0526 21:25:11.032662  527485 command_runner.go:124] ! I0526 21:23:37.485897       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	I0526 21:25:11.032676  527485 command_runner.go:124] ! W0526 21:23:37.486148       1 controllermanager.go:546] Skipping "ephemeral-volume"
	I0526 21:25:11.032690  527485 command_runner.go:124] ! I0526 21:23:37.486971       1 publisher.go:98] Starting root CA certificate configmap publisher
	I0526 21:25:11.032704  527485 command_runner.go:124] ! I0526 21:23:37.487325       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0526 21:25:11.032718  527485 command_runner.go:124] ! I0526 21:23:37.514186       1 controllermanager.go:554] Started "endpointslicemirroring"
	I0526 21:25:11.032737  527485 command_runner.go:124] ! I0526 21:23:37.515190       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0526 21:25:11.032751  527485 command_runner.go:124] ! I0526 21:23:37.515570       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0526 21:25:11.032772  527485 command_runner.go:124] ! I0526 21:23:37.550580       1 controllermanager.go:554] Started "replicaset"
	I0526 21:25:11.032787  527485 command_runner.go:124] ! I0526 21:23:37.551218       1 replica_set.go:182] Starting replicaset controller
	I0526 21:25:11.032807  527485 command_runner.go:124] ! I0526 21:23:37.551414       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0526 21:25:11.032822  527485 command_runner.go:124] ! I0526 21:23:37.987267       1 controllermanager.go:554] Started "horizontalpodautoscaling"
	I0526 21:25:11.032836  527485 command_runner.go:124] ! I0526 21:23:37.988181       1 horizontal.go:169] Starting HPA controller
	I0526 21:25:11.032849  527485 command_runner.go:124] ! I0526 21:23:37.988418       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0526 21:25:11.032880  527485 command_runner.go:124] ! I0526 21:23:38.238507       1 controllermanager.go:554] Started "persistentvolume-binder"
	I0526 21:25:11.032894  527485 command_runner.go:124] ! I0526 21:23:38.238941       1 pv_controller_base.go:307] Starting persistent volume controller
	I0526 21:25:11.032911  527485 command_runner.go:124] ! I0526 21:23:38.238953       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0526 21:25:11.032926  527485 command_runner.go:124] ! I0526 21:23:38.636899       1 controllermanager.go:554] Started "garbagecollector"
	I0526 21:25:11.032942  527485 command_runner.go:124] ! I0526 21:23:38.636902       1 garbagecollector.go:142] Starting garbage collector controller
	I0526 21:25:11.032957  527485 command_runner.go:124] ! I0526 21:23:38.636960       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:11.032970  527485 command_runner.go:124] ! I0526 21:23:38.637525       1 graph_builder.go:289] GraphBuilder running
	I0526 21:25:11.032982  527485 command_runner.go:124] ! I0526 21:23:39.037283       1 controllermanager.go:554] Started "disruption"
	I0526 21:25:11.032997  527485 command_runner.go:124] ! I0526 21:23:39.037574       1 disruption.go:331] Starting disruption controller
	I0526 21:25:11.033012  527485 command_runner.go:124] ! I0526 21:23:39.037585       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0526 21:25:11.033028  527485 command_runner.go:124] ! I0526 21:23:39.286540       1 controllermanager.go:554] Started "clusterrole-aggregation"
	I0526 21:25:11.033044  527485 command_runner.go:124] ! I0526 21:23:39.286598       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
	I0526 21:25:11.033057  527485 command_runner.go:124] ! I0526 21:23:39.286605       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0526 21:25:11.033069  527485 command_runner.go:124] ! I0526 21:23:39.537304       1 controllermanager.go:554] Started "pvc-protection"
	I0526 21:25:11.033089  527485 command_runner.go:124] ! I0526 21:23:39.537579       1 pvc_protection_controller.go:110] Starting PVC protection controller
	I0526 21:25:11.033107  527485 command_runner.go:124] ! I0526 21:23:39.537670       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0526 21:25:11.033121  527485 command_runner.go:124] ! I0526 21:23:39.786982       1 controllermanager.go:554] Started "pv-protection"
	I0526 21:25:11.033135  527485 command_runner.go:124] ! I0526 21:23:39.787110       1 pv_protection_controller.go:83] Starting PV protection controller
	I0526 21:25:11.033150  527485 command_runner.go:124] ! I0526 21:23:39.787118       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0526 21:25:11.033164  527485 command_runner.go:124] ! I0526 21:23:40.036383       1 controllermanager.go:554] Started "endpoint"
	I0526 21:25:11.033179  527485 command_runner.go:124] ! I0526 21:23:40.036415       1 endpoints_controller.go:184] Starting endpoint controller
	I0526 21:25:11.033195  527485 command_runner.go:124] ! I0526 21:23:40.037058       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0526 21:25:11.033208  527485 command_runner.go:124] ! I0526 21:23:40.288607       1 controllermanager.go:554] Started "podgc"
	I0526 21:25:11.033221  527485 command_runner.go:124] ! I0526 21:23:40.288827       1 gc_controller.go:89] Starting GC controller
	I0526 21:25:11.033240  527485 command_runner.go:124] ! I0526 21:23:40.289411       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0526 21:25:11.033265  527485 command_runner.go:124] ! W0526 21:23:40.988861       1 shared_informer.go:494] resyncPeriod 13h30m7.5724073s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:11.033283  527485 command_runner.go:124] ! I0526 21:23:40.989960       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
	I0526 21:25:11.033302  527485 command_runner.go:124] ! I0526 21:23:40.990215       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
	I0526 21:25:11.033319  527485 command_runner.go:124] ! I0526 21:23:40.990426       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
	I0526 21:25:11.033338  527485 command_runner.go:124] ! I0526 21:23:40.990971       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0526 21:25:11.033358  527485 command_runner.go:124] ! I0526 21:23:40.991569       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0526 21:25:11.033378  527485 command_runner.go:124] ! I0526 21:23:40.991963       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0526 21:25:11.033396  527485 command_runner.go:124] ! I0526 21:23:40.992141       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
	I0526 21:25:11.033415  527485 command_runner.go:124] ! I0526 21:23:40.992301       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0526 21:25:11.033430  527485 command_runner.go:124] ! I0526 21:23:40.992532       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
	I0526 21:25:11.033455  527485 command_runner.go:124] ! W0526 21:23:40.992690       1 shared_informer.go:494] resyncPeriod 13h37m25.694603534s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:11.033474  527485 command_runner.go:124] ! I0526 21:23:40.993075       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0526 21:25:11.033494  527485 command_runner.go:124] ! I0526 21:23:40.993243       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0526 21:25:11.033512  527485 command_runner.go:124] ! I0526 21:23:40.993580       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
	I0526 21:25:11.033528  527485 command_runner.go:124] ! I0526 21:23:40.993747       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0526 21:25:11.033547  527485 command_runner.go:124] ! I0526 21:23:40.993780       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
	I0526 21:25:11.033565  527485 command_runner.go:124] ! I0526 21:23:40.993805       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0526 21:25:11.033584  527485 command_runner.go:124] ! I0526 21:23:40.993841       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
	I0526 21:25:11.033603  527485 command_runner.go:124] ! I0526 21:23:40.993861       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
	I0526 21:25:11.033619  527485 command_runner.go:124] ! I0526 21:23:40.993876       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
	I0526 21:25:11.033636  527485 command_runner.go:124] ! I0526 21:23:40.993891       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
	I0526 21:25:11.033657  527485 command_runner.go:124] ! I0526 21:23:40.993951       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0526 21:25:11.033676  527485 command_runner.go:124] ! I0526 21:23:40.993980       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0526 21:25:11.033693  527485 command_runner.go:124] ! I0526 21:23:40.994082       1 controllermanager.go:554] Started "resourcequota"
	I0526 21:25:11.033717  527485 command_runner.go:124] ! I0526 21:23:40.994178       1 resource_quota_controller.go:273] Starting resource quota controller
	I0526 21:25:11.033734  527485 command_runner.go:124] ! I0526 21:23:40.994191       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:11.033748  527485 command_runner.go:124] ! I0526 21:23:40.994219       1 resource_quota_monitor.go:304] QuotaMonitor running
	I0526 21:25:11.033765  527485 command_runner.go:124] ! I0526 21:23:41.028175       1 controllermanager.go:554] Started "namespace"
	I0526 21:25:11.033780  527485 command_runner.go:124] ! I0526 21:23:41.028716       1 namespace_controller.go:200] Starting namespace controller
	I0526 21:25:11.033793  527485 command_runner.go:124] ! I0526 21:23:41.028992       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0526 21:25:11.033804  527485 command_runner.go:124] ! I0526 21:23:41.051981       1 controllermanager.go:554] Started "ttl"
	I0526 21:25:11.033819  527485 command_runner.go:124] ! I0526 21:23:41.052926       1 ttl_controller.go:121] Starting TTL controller
	I0526 21:25:11.033833  527485 command_runner.go:124] ! I0526 21:23:41.053383       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0526 21:25:11.033847  527485 command_runner.go:124] ! I0526 21:23:41.289145       1 controllermanager.go:554] Started "attachdetach"
	I0526 21:25:11.033860  527485 command_runner.go:124] ! W0526 21:23:41.289246       1 controllermanager.go:546] Skipping "ttl-after-finished"
	I0526 21:25:11.033873  527485 command_runner.go:124] ! I0526 21:23:41.289282       1 attach_detach_controller.go:328] Starting attach detach controller
	I0526 21:25:11.033889  527485 command_runner.go:124] ! I0526 21:23:41.289291       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0526 21:25:11.033904  527485 command_runner.go:124] ! I0526 21:23:41.537362       1 controllermanager.go:554] Started "serviceaccount"
	I0526 21:25:11.033918  527485 command_runner.go:124] ! I0526 21:23:41.537403       1 serviceaccounts_controller.go:117] Starting service account controller
	I0526 21:25:11.033933  527485 command_runner.go:124] ! I0526 21:23:41.538137       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0526 21:25:11.033946  527485 command_runner.go:124] ! I0526 21:23:41.787243       1 controllermanager.go:554] Started "deployment"
	I0526 21:25:11.033959  527485 command_runner.go:124] ! I0526 21:23:41.788023       1 deployment_controller.go:153] Starting deployment controller
	I0526 21:25:11.033974  527485 command_runner.go:124] ! I0526 21:23:41.790417       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0526 21:25:11.033986  527485 command_runner.go:124] ! I0526 21:23:41.936235       1 controllermanager.go:554] Started "csrapproving"
	I0526 21:25:11.034003  527485 command_runner.go:124] ! I0526 21:23:41.936293       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0526 21:25:11.034025  527485 command_runner.go:124] ! I0526 21:23:41.936301       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0526 21:25:11.034044  527485 command_runner.go:124] ! I0526 21:23:42.137381       1 request.go:655] Throttling request took 1.048213324s, request: GET:https://192.168.39.229:8443/apis/extensions/v1beta1?timeout=32s
	I0526 21:25:11.034059  527485 command_runner.go:124] ! I0526 21:23:42.189224       1 node_ipam_controller.go:91] Sending events to api server.
	I0526 21:25:11.034077  527485 command_runner.go:124] ! I0526 21:23:52.210125       1 range_allocator.go:82] Sending events to api server.
	I0526 21:25:11.034096  527485 command_runner.go:124] ! I0526 21:23:52.211056       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
	I0526 21:25:11.034110  527485 command_runner.go:124] ! I0526 21:23:52.211333       1 controllermanager.go:554] Started "nodeipam"
	I0526 21:25:11.034127  527485 command_runner.go:124] ! W0526 21:23:52.211708       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	I0526 21:25:11.034141  527485 command_runner.go:124] ! W0526 21:23:52.212021       1 controllermanager.go:546] Skipping "route"
	I0526 21:25:11.034153  527485 command_runner.go:124] ! I0526 21:23:52.212292       1 node_ipam_controller.go:159] Starting ipam controller
	I0526 21:25:11.034168  527485 command_runner.go:124] ! I0526 21:23:52.212876       1 shared_informer.go:240] Waiting for caches to sync for node
	I0526 21:25:11.034183  527485 command_runner.go:124] ! I0526 21:23:52.227871       1 node_lifecycle_controller.go:77] Sending events to api server
	I0526 21:25:11.034198  527485 command_runner.go:124] ! E0526 21:23:52.227991       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
	I0526 21:25:11.034213  527485 command_runner.go:124] ! W0526 21:23:52.228003       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
	I0526 21:25:11.034227  527485 command_runner.go:124] ! I0526 21:23:52.257128       1 controllermanager.go:554] Started "persistentvolume-expander"
	I0526 21:25:11.034240  527485 command_runner.go:124] ! I0526 21:23:52.257967       1 expand_controller.go:310] Starting expand controller
	I0526 21:25:11.034253  527485 command_runner.go:124] ! I0526 21:23:52.258344       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0526 21:25:11.034267  527485 command_runner.go:124] ! I0526 21:23:52.287731       1 controllermanager.go:554] Started "endpointslice"
	I0526 21:25:11.034282  527485 command_runner.go:124] ! I0526 21:23:52.287941       1 endpointslice_controller.go:237] Starting endpoint slice controller
	I0526 21:25:11.034298  527485 command_runner.go:124] ! I0526 21:23:52.287950       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0526 21:25:11.034311  527485 command_runner.go:124] ! I0526 21:23:52.334629       1 controllermanager.go:554] Started "daemonset"
	I0526 21:25:11.034323  527485 command_runner.go:124] ! I0526 21:23:52.334789       1 daemon_controller.go:285] Starting daemon sets controller
	I0526 21:25:11.034339  527485 command_runner.go:124] ! I0526 21:23:52.334797       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0526 21:25:11.034357  527485 command_runner.go:124] ! I0526 21:23:52.366633       1 controllermanager.go:554] Started "statefulset"
	I0526 21:25:11.034372  527485 command_runner.go:124] ! I0526 21:23:52.366920       1 stateful_set.go:146] Starting stateful set controller
	I0526 21:25:11.034389  527485 command_runner.go:124] ! I0526 21:23:52.367009       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0526 21:25:11.034402  527485 command_runner.go:124] ! I0526 21:23:52.395670       1 controllermanager.go:554] Started "cronjob"
	I0526 21:25:11.034414  527485 command_runner.go:124] ! I0526 21:23:52.395842       1 cronjob_controller.go:96] Starting CronJob Manager
	I0526 21:25:11.034431  527485 command_runner.go:124] ! I0526 21:23:52.416080       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0526 21:25:11.034448  527485 command_runner.go:124] ! I0526 21:23:52.416256       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0526 21:25:11.034469  527485 command_runner.go:124] ! I0526 21:23:52.416385       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034689  527485 command_runner.go:124] ! I0526 21:23:52.416862       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0526 21:25:11.034710  527485 command_runner.go:124] ! I0526 21:23:52.416958       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0526 21:25:11.034726  527485 command_runner.go:124] ! I0526 21:23:52.416975       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034741  527485 command_runner.go:124] ! I0526 21:23:52.417715       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0526 21:25:11.034765  527485 command_runner.go:124] ! I0526 21:23:52.417882       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0526 21:25:11.034785  527485 command_runner.go:124] ! I0526 21:23:52.418025       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034800  527485 command_runner.go:124] ! I0526 21:23:52.418373       1 controllermanager.go:554] Started "csrsigning"
	I0526 21:25:11.034816  527485 command_runner.go:124] ! I0526 21:23:52.418419       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0526 21:25:11.034835  527485 command_runner.go:124] ! I0526 21:23:52.418799       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034851  527485 command_runner.go:124] ! I0526 21:23:52.418805       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0526 21:25:11.034865  527485 command_runner.go:124] ! I0526 21:23:52.515732       1 controllermanager.go:554] Started "bootstrapsigner"
	I0526 21:25:11.034880  527485 command_runner.go:124] ! I0526 21:23:52.516431       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0526 21:25:11.034896  527485 command_runner.go:124] ! I0526 21:23:52.765741       1 controllermanager.go:554] Started "replicationcontroller"
	I0526 21:25:11.034914  527485 command_runner.go:124] ! I0526 21:23:52.765769       1 replica_set.go:182] Starting replicationcontroller controller
	I0526 21:25:11.034931  527485 command_runner.go:124] ! I0526 21:23:52.765867       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0526 21:25:11.034944  527485 command_runner.go:124] ! I0526 21:23:52.915756       1 node_lifecycle_controller.go:380] Sending events to api server.
	I0526 21:25:11.034958  527485 command_runner.go:124] ! I0526 21:23:52.916150       1 taint_manager.go:163] Sending events to api server.
	I0526 21:25:11.034972  527485 command_runner.go:124] ! I0526 21:23:52.916342       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
	I0526 21:25:11.034987  527485 command_runner.go:124] ! I0526 21:23:52.916386       1 controllermanager.go:554] Started "nodelifecycle"
	I0526 21:25:11.035002  527485 command_runner.go:124] ! I0526 21:23:52.916749       1 node_lifecycle_controller.go:542] Starting node controller
	I0526 21:25:11.035017  527485 command_runner.go:124] ! I0526 21:23:52.916921       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0526 21:25:11.035030  527485 command_runner.go:124] ! I0526 21:23:53.165965       1 controllermanager.go:554] Started "job"
	I0526 21:25:11.035042  527485 command_runner.go:124] ! I0526 21:23:53.166025       1 job_controller.go:148] Starting job controller
	I0526 21:25:11.035057  527485 command_runner.go:124] ! I0526 21:23:53.167211       1 shared_informer.go:240] Waiting for caches to sync for job
	I0526 21:25:11.035073  527485 command_runner.go:124] ! I0526 21:23:53.170385       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:11.035099  527485 command_runner.go:124] ! W0526 21:23:53.178965       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955" does not exist
	I0526 21:25:11.035117  527485 command_runner.go:124] ! I0526 21:23:53.213010       1 shared_informer.go:247] Caches are synced for node 
	I0526 21:25:11.035130  527485 command_runner.go:124] ! I0526 21:23:53.213735       1 range_allocator.go:172] Starting range CIDR allocator
	I0526 21:25:11.035145  527485 command_runner.go:124] ! I0526 21:23:53.214071       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0526 21:25:11.035159  527485 command_runner.go:124] ! I0526 21:23:53.214233       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0526 21:25:11.035180  527485 command_runner.go:124] ! I0526 21:23:53.215982       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0526 21:25:11.035197  527485 command_runner.go:124] ! I0526 21:23:53.216587       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0526 21:25:11.035213  527485 command_runner.go:124] ! I0526 21:23:53.217085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0526 21:25:11.035248  527485 command_runner.go:124] ! I0526 21:23:53.217522       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0526 21:25:11.035267  527485 command_runner.go:124] ! I0526 21:23:53.218215       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0526 21:25:11.035282  527485 command_runner.go:124] ! I0526 21:23:53.218891       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0526 21:25:11.035298  527485 command_runner.go:124] ! I0526 21:23:53.229560       1 shared_informer.go:247] Caches are synced for namespace 
	I0526 21:25:11.035313  527485 command_runner.go:124] ! I0526 21:23:53.235029       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0526 21:25:11.035327  527485 command_runner.go:124] ! I0526 21:23:53.238654       1 shared_informer.go:247] Caches are synced for service account 
	I0526 21:25:11.035341  527485 command_runner.go:124] ! I0526 21:23:53.240824       1 shared_informer.go:247] Caches are synced for endpoint 
	I0526 21:25:11.035357  527485 command_runner.go:124] ! I0526 21:23:53.247379       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0526 21:25:11.035373  527485 command_runner.go:124] ! I0526 21:23:53.251558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0526 21:25:11.035386  527485 command_runner.go:124] ! I0526 21:23:53.252699       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0526 21:25:11.035397  527485 command_runner.go:124] ! I0526 21:23:53.256544       1 shared_informer.go:247] Caches are synced for TTL 
	I0526 21:25:11.035414  527485 command_runner.go:124] ! I0526 21:23:53.265652       1 range_allocator.go:373] Set node multinode-20210526212238-510955 PodCIDR to [10.244.0.0/24]
	I0526 21:25:11.035429  527485 command_runner.go:124] ! I0526 21:23:53.268627       1 shared_informer.go:247] Caches are synced for job 
	I0526 21:25:11.035444  527485 command_runner.go:124] ! I0526 21:23:53.268752       1 shared_informer.go:247] Caches are synced for stateful set 
	I0526 21:25:11.035458  527485 command_runner.go:124] ! I0526 21:23:53.290037       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0526 21:25:11.035473  527485 command_runner.go:124] ! I0526 21:23:53.290226       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0526 21:25:11.035486  527485 command_runner.go:124] ! I0526 21:23:53.292847       1 shared_informer.go:247] Caches are synced for deployment 
	I0526 21:25:11.035500  527485 command_runner.go:124] ! I0526 21:23:53.293728       1 shared_informer.go:247] Caches are synced for GC 
	I0526 21:25:11.035513  527485 command_runner.go:124] ! I0526 21:23:53.293879       1 shared_informer.go:247] Caches are synced for HPA 
	I0526 21:25:11.035529  527485 command_runner.go:124] ! I0526 21:23:53.293974       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0526 21:25:11.035544  527485 command_runner.go:124] ! I0526 21:23:53.317816       1 shared_informer.go:247] Caches are synced for taint 
	I0526 21:25:11.035557  527485 command_runner.go:124] ! I0526 21:23:53.317927       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I0526 21:25:11.035574  527485 command_runner.go:124] ! W0526 21:23:53.318278       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955. Assuming now as a timestamp.
	I0526 21:25:11.035593  527485 command_runner.go:124] ! I0526 21:23:53.318396       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0526 21:25:11.035608  527485 command_runner.go:124] ! I0526 21:23:53.318775       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0526 21:25:11.035637  527485 command_runner.go:124] ! I0526 21:23:53.319750       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955 event: Registered Node multinode-20210526212238-510955 in Controller"
	I0526 21:25:11.035664  527485 command_runner.go:124] ! I0526 21:23:53.337883       1 shared_informer.go:247] Caches are synced for disruption 
	I0526 21:25:11.035677  527485 command_runner.go:124] ! I0526 21:23:53.337896       1 disruption.go:339] Sending events to api server.
	I0526 21:25:11.035693  527485 command_runner.go:124] ! I0526 21:23:53.368948       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0526 21:25:11.035715  527485 command_runner.go:124] ! I0526 21:23:53.431193       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2wgbs"
	I0526 21:25:11.035740  527485 command_runner.go:124] ! I0526 21:23:53.431223       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0526 21:25:11.035755  527485 command_runner.go:124] ! I0526 21:23:53.459736       1 shared_informer.go:247] Caches are synced for expand 
	I0526 21:25:11.035773  527485 command_runner.go:124] ! I0526 21:23:53.479631       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:11.035789  527485 command_runner.go:124] ! I0526 21:23:53.487838       1 shared_informer.go:247] Caches are synced for PV protection 
	I0526 21:25:11.035802  527485 command_runner.go:124] ! I0526 21:23:53.489356       1 shared_informer.go:247] Caches are synced for attach detach 
	I0526 21:25:11.035818  527485 command_runner.go:124] ! I0526 21:23:53.494672       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:11.035834  527485 command_runner.go:124] ! I0526 21:23:53.539359       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0526 21:25:11.035858  527485 command_runner.go:124] ! I0526 21:23:53.545401       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbl42"
	I0526 21:25:11.035883  527485 command_runner.go:124] ! I0526 21:23:53.545422       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:11.035907  527485 command_runner.go:124] ! I0526 21:23:53.556102       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-multinode-20210526212238-510955" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:25:11.035932  527485 command_runner.go:124] ! I0526 21:23:53.567036       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tw67b"
	I0526 21:25:11.035960  527485 command_runner.go:124] ! E0526 21:23:53.635384       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0526 21:25:11.035977  527485 command_runner.go:124] ! I0526 21:23:53.689947       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:11.036002  527485 command_runner.go:124] ! I0526 21:23:53.733785       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0526 21:25:11.036029  527485 command_runner.go:124] ! I0526 21:23:53.758013       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:11.036045  527485 command_runner.go:124] ! I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:11.036058  527485 command_runner.go:124] ! I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:11.036078  527485 command_runner.go:124] ! I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:25:11.036097  527485 command_runner.go:124] ! I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0526 21:25:11.045061  527485 logs.go:123] Gathering logs for container status ...
	I0526 21:25:11.045080  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:25:11.070698  527485 command_runner.go:124] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	I0526 21:25:11.070728  527485 command_runner.go:124] > a9593dff4428d       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   1d96eb581f035
	I0526 21:25:11.070742  527485 command_runner.go:124] > 5d3df8c94eaed       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   722b1b257c571
	I0526 21:25:11.070769  527485 command_runner.go:124] > 69df1859ce4d1       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   53490c652b9e5
	I0526 21:25:11.070809  527485 command_runner.go:124] > de6efc6fec4b2       43154ddb57a83       About a minute ago   Running             kube-proxy                0                   038c42970362d
	I0526 21:25:11.070830  527485 command_runner.go:124] > c8538106e966b       0369cf4303ffd       About a minute ago   Running             etcd                      0                   2ad404c6a9c44
	I0526 21:25:11.070846  527485 command_runner.go:124] > e6bb9bee7539a       ed2c44fbdd78b       About a minute ago   Running             kube-scheduler            0                   24fd8b8599a6e
	I0526 21:25:11.070870  527485 command_runner.go:124] > 2314e41b1b443       a27166429d98e       About a minute ago   Running             kube-controller-manager   0                   73ada73fbbf0b
	I0526 21:25:11.070910  527485 command_runner.go:124] > a0581c0e5409b       a8c2fdb8bf76e       About a minute ago   Running             kube-apiserver            0                   fe43674906f20
	I0526 21:25:13.573645  527485 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0526 21:25:13.584618  527485 command_runner.go:124] > 2592
	I0526 21:25:13.584780  527485 api_server.go:70] duration metric: took 1m18.776650446s to wait for apiserver process to appear ...
	I0526 21:25:13.584801  527485 api_server.go:86] waiting for apiserver healthz status ...
	I0526 21:25:13.584833  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:25:13.584911  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:25:13.604799  527485 command_runner.go:124] > a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c
	I0526 21:25:13.604832  527485 cri.go:76] found id: "a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:13.604841  527485 cri.go:76] found id: ""
	I0526 21:25:13.604848  527485 logs.go:270] 1 containers: [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c]
	I0526 21:25:13.604907  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.608770  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.608906  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:25:13.608984  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:25:13.628454  527485 command_runner.go:124] > c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad
	I0526 21:25:13.628498  527485 cri.go:76] found id: "c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:13.628507  527485 cri.go:76] found id: ""
	I0526 21:25:13.628514  527485 logs.go:270] 1 containers: [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad]
	I0526 21:25:13.628558  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.632594  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.632662  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:25:13.632717  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:25:13.651568  527485 command_runner.go:124] > a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a
	I0526 21:25:13.651683  527485 cri.go:76] found id: "a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:13.651700  527485 cri.go:76] found id: ""
	I0526 21:25:13.651707  527485 logs.go:270] 1 containers: [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a]
	I0526 21:25:13.651744  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.655613  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.655703  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:25:13.655747  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:25:13.675717  527485 command_runner.go:124] > e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08
	I0526 21:25:13.675821  527485 cri.go:76] found id: "e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:13.675836  527485 cri.go:76] found id: ""
	I0526 21:25:13.675841  527485 logs.go:270] 1 containers: [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08]
	I0526 21:25:13.675871  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.679599  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.679693  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:25:13.679727  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:25:13.699794  527485 command_runner.go:124] > de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2
	I0526 21:25:13.699810  527485 cri.go:76] found id: "de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:13.699821  527485 cri.go:76] found id: ""
	I0526 21:25:13.699825  527485 logs.go:270] 1 containers: [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2]
	I0526 21:25:13.699852  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.703344  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.703637  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:25:13.703683  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:25:13.725683  527485 cri.go:76] found id: ""
	I0526 21:25:13.725697  527485 logs.go:270] 0 containers: []
	W0526 21:25:13.725702  527485 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:25:13.725707  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:25:13.725739  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:25:13.743433  527485 command_runner.go:124] > 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d
	I0526 21:25:13.743475  527485 cri.go:76] found id: "5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:13.743486  527485 cri.go:76] found id: ""
	I0526 21:25:13.743493  527485 logs.go:270] 1 containers: [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d]
	I0526 21:25:13.743519  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.747149  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.747433  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:25:13.747479  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:25:13.765103  527485 command_runner.go:124] > 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18
	I0526 21:25:13.766695  527485 cri.go:76] found id: "2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:13.766712  527485 cri.go:76] found id: ""
	I0526 21:25:13.766719  527485 logs.go:270] 1 containers: [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18]
	I0526 21:25:13.766751  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.770984  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.771048  527485 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:25:13.771072  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0526 21:25:13.911602  527485 command_runner.go:124] > Name:               multinode-20210526212238-510955
	I0526 21:25:13.911625  527485 command_runner.go:124] > Roles:              control-plane,master
	I0526 21:25:13.911633  527485 command_runner.go:124] > Labels:             beta.kubernetes.io/arch=amd64
	I0526 21:25:13.911641  527485 command_runner.go:124] >                     beta.kubernetes.io/os=linux
	I0526 21:25:13.911658  527485 command_runner.go:124] >                     kubernetes.io/arch=amd64
	I0526 21:25:13.911678  527485 command_runner.go:124] >                     kubernetes.io/hostname=multinode-20210526212238-510955
	I0526 21:25:13.911691  527485 command_runner.go:124] >                     kubernetes.io/os=linux
	I0526 21:25:13.911705  527485 command_runner.go:124] >                     minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	I0526 21:25:13.911715  527485 command_runner.go:124] >                     minikube.k8s.io/name=multinode-20210526212238-510955
	I0526 21:25:13.911725  527485 command_runner.go:124] >                     minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	I0526 21:25:13.911735  527485 command_runner.go:124] >                     minikube.k8s.io/version=v1.20.0
	I0526 21:25:13.911743  527485 command_runner.go:124] >                     node-role.kubernetes.io/control-plane=
	I0526 21:25:13.911751  527485 command_runner.go:124] >                     node-role.kubernetes.io/master=
	I0526 21:25:13.911775  527485 command_runner.go:124] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	I0526 21:25:13.911789  527485 command_runner.go:124] >                     node.alpha.kubernetes.io/ttl: 0
	I0526 21:25:13.911798  527485 command_runner.go:124] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0526 21:25:13.911810  527485 command_runner.go:124] > CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	I0526 21:25:13.911828  527485 command_runner.go:124] > Taints:             <none>
	I0526 21:25:13.911837  527485 command_runner.go:124] > Unschedulable:      false
	I0526 21:25:13.911840  527485 command_runner.go:124] > Lease:
	I0526 21:25:13.911852  527485 command_runner.go:124] >   HolderIdentity:  multinode-20210526212238-510955
	I0526 21:25:13.911864  527485 command_runner.go:124] >   AcquireTime:     <unset>
	I0526 21:25:13.911875  527485 command_runner.go:124] >   RenewTime:       Wed, 26 May 2021 21:25:04 +0000
	I0526 21:25:13.911886  527485 command_runner.go:124] > Conditions:
	I0526 21:25:13.911901  527485 command_runner.go:124] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0526 21:25:13.911916  527485 command_runner.go:124] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0526 21:25:13.911930  527485 command_runner.go:124] >   MemoryPressure   False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0526 21:25:13.911958  527485 command_runner.go:124] >   DiskPressure     False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0526 21:25:13.911980  527485 command_runner.go:124] >   PIDPressure      False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0526 21:25:13.912001  527485 command_runner.go:124] >   Ready            True    Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	I0526 21:25:13.912009  527485 command_runner.go:124] > Addresses:
	I0526 21:25:13.912014  527485 command_runner.go:124] >   InternalIP:  192.168.39.229
	I0526 21:25:13.912025  527485 command_runner.go:124] >   Hostname:    multinode-20210526212238-510955
	I0526 21:25:13.912035  527485 command_runner.go:124] > Capacity:
	I0526 21:25:13.912043  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:13.912056  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:13.912068  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:13.912080  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:13.912091  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:13.912101  527485 command_runner.go:124] > Allocatable:
	I0526 21:25:13.912106  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:13.912116  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:13.912126  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:13.912138  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:13.912150  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:13.912160  527485 command_runner.go:124] > System Info:
	I0526 21:25:13.912172  527485 command_runner.go:124] >   Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	I0526 21:25:13.912186  527485 command_runner.go:124] >   System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	I0526 21:25:13.912199  527485 command_runner.go:124] >   Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	I0526 21:25:13.912211  527485 command_runner.go:124] >   Kernel Version:             4.19.182
	I0526 21:25:13.912224  527485 command_runner.go:124] >   OS Image:                   Buildroot 2020.02.12
	I0526 21:25:13.912237  527485 command_runner.go:124] >   Operating System:           linux
	I0526 21:25:13.912247  527485 command_runner.go:124] >   Architecture:               amd64
	I0526 21:25:13.912259  527485 command_runner.go:124] >   Container Runtime Version:  containerd://1.4.4
	I0526 21:25:13.912271  527485 command_runner.go:124] >   Kubelet Version:            v1.20.2
	I0526 21:25:13.912281  527485 command_runner.go:124] >   Kube-Proxy Version:         v1.20.2
	I0526 21:25:13.912289  527485 command_runner.go:124] > PodCIDR:                      10.244.0.0/24
	I0526 21:25:13.912303  527485 command_runner.go:124] > PodCIDRs:                     10.244.0.0/24
	I0526 21:25:13.912316  527485 command_runner.go:124] > Non-terminated Pods:          (8 in total)
	I0526 21:25:13.912333  527485 command_runner.go:124] >   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	I0526 21:25:13.912353  527485 command_runner.go:124] >   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	I0526 21:25:13.912369  527485 command_runner.go:124] >   kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     80s
	I0526 21:25:13.912388  527485 command_runner.go:124] >   kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         89s
	I0526 21:25:13.912409  527485 command_runner.go:124] >   kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      80s
	I0526 21:25:13.912429  527485 command_runner.go:124] >   kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	I0526 21:25:13.912481  527485 command_runner.go:124] >   kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	I0526 21:25:13.912503  527485 command_runner.go:124] >   kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	I0526 21:25:13.912520  527485 command_runner.go:124] >   kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	I0526 21:25:13.912538  527485 command_runner.go:124] >   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	I0526 21:25:13.912548  527485 command_runner.go:124] > Allocated resources:
	I0526 21:25:13.912560  527485 command_runner.go:124] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0526 21:25:13.912573  527485 command_runner.go:124] >   Resource           Requests     Limits
	I0526 21:25:13.912586  527485 command_runner.go:124] >   --------           --------     ------
	I0526 21:25:13.912598  527485 command_runner.go:124] >   cpu                850m (42%)   100m (5%)
	I0526 21:25:13.912611  527485 command_runner.go:124] >   memory             220Mi (10%)  220Mi (10%)
	I0526 21:25:13.912625  527485 command_runner.go:124] >   ephemeral-storage  100Mi (0%)   0 (0%)
	I0526 21:25:13.912635  527485 command_runner.go:124] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0526 21:25:13.912642  527485 command_runner.go:124] > Events:
	I0526 21:25:13.912657  527485 command_runner.go:124] >   Type    Reason                   Age                  From        Message
	I0526 21:25:13.912674  527485 command_runner.go:124] >   ----    ------                   ----                 ----        -------
	I0526 21:25:13.912690  527485 command_runner.go:124] >   Normal  Starting                 106s                 kubelet     Starting kubelet.
	I0526 21:25:13.912710  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  105s (x4 over 106s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:13.912726  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    105s (x3 over 106s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:13.912745  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     105s (x3 over 106s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:13.912768  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  105s                 kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:13.912784  527485 command_runner.go:124] >   Normal  Starting                 90s                  kubelet     Starting kubelet.
	I0526 21:25:13.912801  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  89s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:13.912820  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    89s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:13.912840  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     89s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:13.912858  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  89s                  kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:13.912890  527485 command_runner.go:124] >   Normal  Starting                 79s                  kube-proxy  Starting kube-proxy.
	I0526 21:25:13.912903  527485 command_runner.go:124] >   Normal  NodeReady                69s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
	I0526 21:25:13.919030  527485 logs.go:123] Gathering logs for etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] ...
	I0526 21:25:13.919056  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:13.939979  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:13.940392  527485 command_runner.go:124] ! 2021-05-26 21:23:30.145280 I | etcdmain: etcd Version: 3.4.13
	I0526 21:25:13.940467  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146007 I | etcdmain: Git SHA: ae9734ed2
	I0526 21:25:13.940997  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146359 I | etcdmain: Go Version: go1.12.17
	I0526 21:25:13.941588  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146935 I | etcdmain: Go OS/Arch: linux/amd64
	I0526 21:25:13.941957  527485 command_runner.go:124] ! 2021-05-26 21:23:30.147549 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
	I0526 21:25:13.942250  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:13.942621  527485 command_runner.go:124] ! 2021-05-26 21:23:30.148927 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:13.942715  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159191 I | embed: name = multinode-20210526212238-510955
	I0526 21:25:13.943139  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159781 I | embed: data dir = /var/lib/minikube/etcd
	I0526 21:25:13.943303  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161368 I | embed: member dir = /var/lib/minikube/etcd/member
	I0526 21:25:13.943549  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161781 I | embed: heartbeat = 100ms
	I0526 21:25:13.943727  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162024 I | embed: election = 1000ms
	I0526 21:25:13.944038  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162419 I | embed: snapshot count = 10000
	I0526 21:25:13.944440  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162834 I | embed: advertise client URLs = https://192.168.39.229:2379
	I0526 21:25:13.944675  527485 command_runner.go:124] ! 2021-05-26 21:23:30.186657 I | etcdserver: starting member b8647f2870156d71 in cluster 2bfbf13ce68722b
	I0526 21:25:13.944752  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=()
	I0526 21:25:13.945145  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 0
	I0526 21:25:13.945357  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: newRaft b8647f2870156d71 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	I0526 21:25:13.945550  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 1
	I0526 21:25:13.945736  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:13.946210  527485 command_runner.go:124] ! 2021-05-26 21:23:30.205555 W | auth: simple token is not cryptographically signed
	I0526 21:25:13.946249  527485 command_runner.go:124] ! 2021-05-26 21:23:30.234208 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	I0526 21:25:13.946385  527485 command_runner.go:124] ! 2021-05-26 21:23:30.243414 I | etcdserver: b8647f2870156d71 as single-node; fast-forwarding 9 ticks (election ticks 10)
	I0526 21:25:13.946493  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:13.946578  527485 command_runner.go:124] ! 2021-05-26 21:23:30.255082 I | etcdserver/membership: added member b8647f2870156d71 [https://192.168.39.229:2380] to cluster 2bfbf13ce68722b
	I0526 21:25:13.946860  527485 command_runner.go:124] ! 2021-05-26 21:23:30.261097 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:13.946923  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264526 I | embed: listening for peers on 192.168.39.229:2380
	I0526 21:25:13.947147  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264701 I | embed: listening for metrics on http://127.0.0.1:2381
	I0526 21:25:13.947519  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 is starting a new election at term 1
	I0526 21:25:13.947533  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became candidate at term 2
	I0526 21:25:13.947545  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 2
	I0526 21:25:13.947555  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became leader at term 2
	I0526 21:25:13.947573  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 2
	I0526 21:25:13.947585  527485 command_runner.go:124] ! 2021-05-26 21:23:30.893688 I | etcdserver: setting up the initial cluster version to 3.4
	I0526 21:25:13.947598  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897562 I | embed: ready to serve client requests
	I0526 21:25:13.947615  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897893 I | etcdserver: published {Name:multinode-20210526212238-510955 ClientURLs:[https://192.168.39.229:2379]} to cluster 2bfbf13ce68722b
	I0526 21:25:13.947629  527485 command_runner.go:124] ! 2021-05-26 21:23:30.898097 I | embed: ready to serve client requests
	I0526 21:25:13.947641  527485 command_runner.go:124] ! 2021-05-26 21:23:30.904911 I | embed: serving client requests on 127.0.0.1:2379
	I0526 21:25:13.947653  527485 command_runner.go:124] ! 2021-05-26 21:23:30.925406 I | embed: serving client requests on 192.168.39.229:2379
	I0526 21:25:13.947666  527485 command_runner.go:124] ! 2021-05-26 21:23:30.930764 N | etcdserver/membership: set the initial cluster version to 3.4
	I0526 21:25:13.947680  527485 command_runner.go:124] ! 2021-05-26 21:23:30.973015 I | etcdserver/api: enabled capabilities for version 3.4
	I0526 21:25:13.947701  527485 command_runner.go:124] ! 2021-05-26 21:23:35.005110 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (158.136927ms) to execute
	I0526 21:25:13.947727  527485 command_runner.go:124] ! 2021-05-26 21:23:35.008540 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (159.3133ms) to execute
	I0526 21:25:13.947748  527485 command_runner.go:124] ! 2021-05-26 21:23:35.012635 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.936302ms) to execute
	I0526 21:25:13.947773  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013064 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (148.811077ms) to execute
	I0526 21:25:13.947795  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013577 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (157.477156ms) to execute
	I0526 21:25:13.947811  527485 command_runner.go:124] ! 2021-05-26 21:23:48.034379 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947829  527485 command_runner.go:124] ! 2021-05-26 21:23:50.916831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947841  527485 command_runner.go:124] ! 2021-05-26 21:24:00.917857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947853  527485 command_runner.go:124] ! 2021-05-26 21:24:10.918220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947867  527485 command_runner.go:124] ! 2021-05-26 21:24:20.917896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947880  527485 command_runner.go:124] ! 2021-05-26 21:24:30.916918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947894  527485 command_runner.go:124] ! 2021-05-26 21:24:40.917190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947907  527485 command_runner.go:124] ! 2021-05-26 21:24:50.917237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947921  527485 command_runner.go:124] ! 2021-05-26 21:25:00.916673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947934  527485 command_runner.go:124] ! 2021-05-26 21:25:10.921256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.951431  527485 logs.go:123] Gathering logs for storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] ...
	I0526 21:25:13.951452  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:13.971535  527485 command_runner.go:124] ! I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:25:13.971977  527485 command_runner.go:124] ! I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:25:13.972343  527485 command_runner.go:124] ! I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:25:13.972396  527485 command_runner.go:124] ! I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:25:13.972669  527485 command_runner.go:124] ! I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:13.972740  527485 command_runner.go:124] ! I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:25:13.973145  527485 command_runner.go:124] ! I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:13.974698  527485 logs.go:123] Gathering logs for container status ...
	I0526 21:25:13.974714  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:25:14.003527  527485 command_runner.go:124] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	I0526 21:25:14.003548  527485 command_runner.go:124] > a9593dff4428d       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   1d96eb581f035
	I0526 21:25:14.003558  527485 command_runner.go:124] > 5d3df8c94eaed       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   722b1b257c571
	I0526 21:25:14.003570  527485 command_runner.go:124] > 69df1859ce4d1       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   53490c652b9e5
	I0526 21:25:14.003582  527485 command_runner.go:124] > de6efc6fec4b2       43154ddb57a83       About a minute ago   Running             kube-proxy                0                   038c42970362d
	I0526 21:25:14.003592  527485 command_runner.go:124] > c8538106e966b       0369cf4303ffd       About a minute ago   Running             etcd                      0                   2ad404c6a9c44
	I0526 21:25:14.003604  527485 command_runner.go:124] > e6bb9bee7539a       ed2c44fbdd78b       About a minute ago   Running             kube-scheduler            0                   24fd8b8599a6e
	I0526 21:25:14.003618  527485 command_runner.go:124] > 2314e41b1b443       a27166429d98e       About a minute ago   Running             kube-controller-manager   0                   73ada73fbbf0b
	I0526 21:25:14.003630  527485 command_runner.go:124] > a0581c0e5409b       a8c2fdb8bf76e       About a minute ago   Running             kube-apiserver            0                   fe43674906f20
	I0526 21:25:14.004139  527485 logs.go:123] Gathering logs for dmesg ...
	I0526 21:25:14.004150  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:25:14.014722  527485 command_runner.go:124] > [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0526 21:25:14.014739  527485 command_runner.go:124] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0526 21:25:14.014755  527485 command_runner.go:124] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0526 21:25:14.014769  527485 command_runner.go:124] > [  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0526 21:25:14.014781  527485 command_runner.go:124] > [  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	I0526 21:25:14.014792  527485 command_runner.go:124] >               If you want to keep using the local clock, then add:
	I0526 21:25:14.014797  527485 command_runner.go:124] >                 "trace_clock=local"
	I0526 21:25:14.014803  527485 command_runner.go:124] >               on the kernel command line
	I0526 21:25:14.014811  527485 command_runner.go:124] > [  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0526 21:25:14.014819  527485 command_runner.go:124] > [  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	I0526 21:25:14.014829  527485 command_runner.go:124] > [  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	I0526 21:25:14.014840  527485 command_runner.go:124] > [  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	I0526 21:25:14.014856  527485 command_runner.go:124] > [  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	I0526 21:25:14.014871  527485 command_runner.go:124] > [  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	I0526 21:25:14.014883  527485 command_runner.go:124] > [  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	I0526 21:25:14.014898  527485 command_runner.go:124] > [  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	I0526 21:25:14.014908  527485 command_runner.go:124] > [May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	I0526 21:25:14.014914  527485 command_runner.go:124] > [  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	I0526 21:25:14.014924  527485 command_runner.go:124] > [  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	I0526 21:25:14.014930  527485 command_runner.go:124] > [ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	I0526 21:25:14.014938  527485 command_runner.go:124] > [ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	I0526 21:25:14.014944  527485 command_runner.go:124] > [May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	I0526 21:25:14.014957  527485 command_runner.go:124] > [ +45.152218] NFSD: Unable to end grace period: -110
	I0526 21:25:14.015833  527485 logs.go:123] Gathering logs for kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] ...
	I0526 21:25:14.015845  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:14.048160  527485 command_runner.go:124] ! Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	I0526 21:25:14.048183  527485 command_runner.go:124] ! I0526 21:23:29.805604       1 server.go:632] external host was not specified, using 192.168.39.229
	I0526 21:25:14.048193  527485 command_runner.go:124] ! I0526 21:23:29.806982       1 server.go:182] Version: v1.20.2
	I0526 21:25:14.048205  527485 command_runner.go:124] ! I0526 21:23:30.593640       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0526 21:25:14.048237  527485 command_runner.go:124] ! I0526 21:23:30.598821       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:14.048267  527485 command_runner.go:124] ! I0526 21:23:30.598945       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:14.048310  527485 command_runner.go:124] ! I0526 21:23:30.600954       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:14.048341  527485 command_runner.go:124] ! I0526 21:23:30.601309       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:14.048356  527485 command_runner.go:124] ! I0526 21:23:30.616590       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048373  527485 command_runner.go:124] ! I0526 21:23:30.617065       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048386  527485 command_runner.go:124] ! I0526 21:23:30.995013       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048406  527485 command_runner.go:124] ! I0526 21:23:30.995139       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048420  527485 command_runner.go:124] ! I0526 21:23:31.030659       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:14.048436  527485 command_runner.go:124] ! I0526 21:23:31.031231       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.048450  527485 command_runner.go:124] ! I0526 21:23:31.031324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.048461  527485 command_runner.go:124] ! I0526 21:23:31.032369       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048478  527485 command_runner.go:124] ! I0526 21:23:31.032725       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048491  527485 command_runner.go:124] ! I0526 21:23:31.143094       1 instance.go:289] Using reconciler: lease
	I0526 21:25:14.048501  527485 command_runner.go:124] ! I0526 21:23:31.148814       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048516  527485 command_runner.go:124] ! I0526 21:23:31.148936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048528  527485 command_runner.go:124] ! I0526 21:23:31.164327       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048542  527485 command_runner.go:124] ! I0526 21:23:31.164627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048555  527485 command_runner.go:124] ! I0526 21:23:31.183831       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048570  527485 command_runner.go:124] ! I0526 21:23:31.184185       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048581  527485 command_runner.go:124] ! I0526 21:23:31.203621       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048598  527485 command_runner.go:124] ! I0526 21:23:31.204140       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048608  527485 command_runner.go:124] ! I0526 21:23:31.218608       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048626  527485 command_runner.go:124] ! I0526 21:23:31.218929       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048636  527485 command_runner.go:124] ! I0526 21:23:31.235670       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048651  527485 command_runner.go:124] ! I0526 21:23:31.235780       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048663  527485 command_runner.go:124] ! I0526 21:23:31.248767       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048682  527485 command_runner.go:124] ! I0526 21:23:31.248973       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048700  527485 command_runner.go:124] ! I0526 21:23:31.270717       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048717  527485 command_runner.go:124] ! I0526 21:23:31.272045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048727  527485 command_runner.go:124] ! I0526 21:23:31.287807       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048741  527485 command_runner.go:124] ! I0526 21:23:31.288158       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048755  527485 command_runner.go:124] ! I0526 21:23:31.302175       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048772  527485 command_runner.go:124] ! I0526 21:23:31.302294       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048783  527485 command_runner.go:124] ! I0526 21:23:31.318788       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048798  527485 command_runner.go:124] ! I0526 21:23:31.318898       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048810  527485 command_runner.go:124] ! I0526 21:23:31.340681       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048825  527485 command_runner.go:124] ! I0526 21:23:31.341103       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048838  527485 command_runner.go:124] ! I0526 21:23:31.364875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048852  527485 command_runner.go:124] ! I0526 21:23:31.365260       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048878  527485 command_runner.go:124] ! I0526 21:23:31.375229       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048896  527485 command_runner.go:124] ! I0526 21:23:31.375353       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048908  527485 command_runner.go:124] ! I0526 21:23:31.384385       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048922  527485 command_runner.go:124] ! I0526 21:23:31.384585       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048934  527485 command_runner.go:124] ! I0526 21:23:31.392770       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048950  527485 command_runner.go:124] ! I0526 21:23:31.392939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048961  527485 command_runner.go:124] ! I0526 21:23:31.406398       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048976  527485 command_runner.go:124] ! I0526 21:23:31.406589       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048986  527485 command_runner.go:124] ! I0526 21:23:31.421828       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049000  527485 command_runner.go:124] ! I0526 21:23:31.422392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049015  527485 command_runner.go:124] ! I0526 21:23:31.434772       1 rest.go:131] the default service ipfamily for this cluster is: IPv4
	I0526 21:25:14.049026  527485 command_runner.go:124] ! I0526 21:23:31.530123       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049042  527485 command_runner.go:124] ! I0526 21:23:31.530234       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049053  527485 command_runner.go:124] ! I0526 21:23:31.542917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049069  527485 command_runner.go:124] ! I0526 21:23:31.543258       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049078  527485 command_runner.go:124] ! I0526 21:23:31.558871       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049092  527485 command_runner.go:124] ! I0526 21:23:31.558975       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049105  527485 command_runner.go:124] ! I0526 21:23:31.578311       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049121  527485 command_runner.go:124] ! I0526 21:23:31.578428       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049132  527485 command_runner.go:124] ! I0526 21:23:31.579212       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049147  527485 command_runner.go:124] ! I0526 21:23:31.579406       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049158  527485 command_runner.go:124] ! I0526 21:23:31.593279       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049183  527485 command_runner.go:124] ! I0526 21:23:31.593392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049196  527485 command_runner.go:124] ! I0526 21:23:31.609260       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049211  527485 command_runner.go:124] ! I0526 21:23:31.609368       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049221  527485 command_runner.go:124] ! I0526 21:23:31.626851       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049236  527485 command_runner.go:124] ! I0526 21:23:31.626960       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049251  527485 command_runner.go:124] ! I0526 21:23:31.653023       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049286  527485 command_runner.go:124] ! I0526 21:23:31.653138       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049299  527485 command_runner.go:124] ! I0526 21:23:31.662951       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049315  527485 command_runner.go:124] ! I0526 21:23:31.663349       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049326  527485 command_runner.go:124] ! I0526 21:23:31.683106       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049340  527485 command_runner.go:124] ! I0526 21:23:31.684613       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049350  527485 command_runner.go:124] ! I0526 21:23:31.700741       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049364  527485 command_runner.go:124] ! I0526 21:23:31.701266       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049377  527485 command_runner.go:124] ! I0526 21:23:31.722045       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049392  527485 command_runner.go:124] ! I0526 21:23:31.722235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049405  527485 command_runner.go:124] ! I0526 21:23:31.736295       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049420  527485 command_runner.go:124] ! I0526 21:23:31.737071       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049431  527485 command_runner.go:124] ! I0526 21:23:31.751086       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049445  527485 command_runner.go:124] ! I0526 21:23:31.751202       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049455  527485 command_runner.go:124] ! I0526 21:23:31.767941       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049473  527485 command_runner.go:124] ! I0526 21:23:31.768045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049487  527485 command_runner.go:124] ! I0526 21:23:31.784917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049503  527485 command_runner.go:124] ! I0526 21:23:31.785029       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049513  527485 command_runner.go:124] ! I0526 21:23:31.802204       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049527  527485 command_runner.go:124] ! I0526 21:23:31.802314       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049540  527485 command_runner.go:124] ! I0526 21:23:31.817427       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049555  527485 command_runner.go:124] ! I0526 21:23:31.817616       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049567  527485 command_runner.go:124] ! I0526 21:23:31.837841       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049582  527485 command_runner.go:124] ! I0526 21:23:31.837939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049593  527485 command_runner.go:124] ! I0526 21:23:31.860217       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049608  527485 command_runner.go:124] ! I0526 21:23:31.861221       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049620  527485 command_runner.go:124] ! I0526 21:23:31.871254       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049635  527485 command_runner.go:124] ! I0526 21:23:31.872836       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049645  527485 command_runner.go:124] ! I0526 21:23:31.884052       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049662  527485 command_runner.go:124] ! I0526 21:23:31.884160       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049675  527485 command_runner.go:124] ! I0526 21:23:31.898818       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049690  527485 command_runner.go:124] ! I0526 21:23:31.898925       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049701  527485 command_runner.go:124] ! I0526 21:23:31.913046       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049715  527485 command_runner.go:124] ! I0526 21:23:31.913149       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049730  527485 command_runner.go:124] ! I0526 21:23:31.925884       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049745  527485 command_runner.go:124] ! I0526 21:23:31.925994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049758  527485 command_runner.go:124] ! I0526 21:23:31.939143       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049772  527485 command_runner.go:124] ! I0526 21:23:31.939253       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049782  527485 command_runner.go:124] ! I0526 21:23:31.954393       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049799  527485 command_runner.go:124] ! I0526 21:23:31.956005       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049812  527485 command_runner.go:124] ! I0526 21:23:31.964255       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049828  527485 command_runner.go:124] ! I0526 21:23:31.964369       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049839  527485 command_runner.go:124] ! I0526 21:23:31.980824       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049853  527485 command_runner.go:124] ! I0526 21:23:31.980931       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049866  527485 command_runner.go:124] ! I0526 21:23:31.998875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049881  527485 command_runner.go:124] ! I0526 21:23:31.998978       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049891  527485 command_runner.go:124] ! I0526 21:23:32.014057       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049905  527485 command_runner.go:124] ! I0526 21:23:32.014169       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049918  527485 command_runner.go:124] ! I0526 21:23:32.027301       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049934  527485 command_runner.go:124] ! I0526 21:23:32.027633       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049944  527485 command_runner.go:124] ! I0526 21:23:32.046160       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049959  527485 command_runner.go:124] ! I0526 21:23:32.046890       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049969  527485 command_runner.go:124] ! I0526 21:23:32.068538       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049984  527485 command_runner.go:124] ! I0526 21:23:32.069814       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049997  527485 command_runner.go:124] ! I0526 21:23:32.087119       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050013  527485 command_runner.go:124] ! I0526 21:23:32.087547       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050024  527485 command_runner.go:124] ! I0526 21:23:32.097832       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050039  527485 command_runner.go:124] ! I0526 21:23:32.097940       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050050  527485 command_runner.go:124] ! I0526 21:23:32.107249       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050067  527485 command_runner.go:124] ! I0526 21:23:32.107932       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050077  527485 command_runner.go:124] ! I0526 21:23:32.119796       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050092  527485 command_runner.go:124] ! I0526 21:23:32.119897       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050104  527485 command_runner.go:124] ! I0526 21:23:32.128209       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050120  527485 command_runner.go:124] ! I0526 21:23:32.128321       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050130  527485 command_runner.go:124] ! I0526 21:23:32.138008       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050145  527485 command_runner.go:124] ! I0526 21:23:32.138111       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050155  527485 command_runner.go:124] ! I0526 21:23:32.160727       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050170  527485 command_runner.go:124] ! I0526 21:23:32.160833       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050183  527485 command_runner.go:124] ! I0526 21:23:32.186843       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050197  527485 command_runner.go:124] ! I0526 21:23:32.186949       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050208  527485 command_runner.go:124] ! I0526 21:23:32.198121       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050222  527485 command_runner.go:124] ! I0526 21:23:32.198232       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050235  527485 command_runner.go:124] ! I0526 21:23:32.206015       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050252  527485 command_runner.go:124] ! I0526 21:23:32.206127       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050266  527485 command_runner.go:124] ! I0526 21:23:32.222761       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050286  527485 command_runner.go:124] ! I0526 21:23:32.223204       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050299  527485 command_runner.go:124] ! I0526 21:23:32.232528       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050313  527485 command_runner.go:124] ! I0526 21:23:32.232629       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050325  527485 command_runner.go:124] ! I0526 21:23:32.245897       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050341  527485 command_runner.go:124] ! I0526 21:23:32.246007       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050354  527485 command_runner.go:124] ! I0526 21:23:32.263847       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050422  527485 command_runner.go:124] ! I0526 21:23:32.263950       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050436  527485 command_runner.go:124] ! I0526 21:23:32.275996       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050451  527485 command_runner.go:124] ! I0526 21:23:32.276100       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050462  527485 command_runner.go:124] ! I0526 21:23:32.286992       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050479  527485 command_runner.go:124] ! I0526 21:23:32.288760       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050492  527485 command_runner.go:124] ! I0526 21:23:32.300558       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050508  527485 command_runner.go:124] ! I0526 21:23:32.300656       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050521  527485 command_runner.go:124] ! W0526 21:23:32.466350       1 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
	I0526 21:25:14.050535  527485 command_runner.go:124] ! W0526 21:23:32.475974       1 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050549  527485 command_runner.go:124] ! W0526 21:23:32.486620       1 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050565  527485 command_runner.go:124] ! W0526 21:23:32.495038       1 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050580  527485 command_runner.go:124] ! W0526 21:23:32.498634       1 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050594  527485 command_runner.go:124] ! W0526 21:23:32.503834       1 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050612  527485 command_runner.go:124] ! W0526 21:23:32.506839       1 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050626  527485 command_runner.go:124] ! W0526 21:23:32.511920       1 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
	I0526 21:25:14.050639  527485 command_runner.go:124] ! W0526 21:23:32.512155       1 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
	I0526 21:25:14.050671  527485 command_runner.go:124] ! I0526 21:23:32.520325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:14.050702  527485 command_runner.go:124] ! I0526 21:23:32.520699       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:14.050717  527485 command_runner.go:124] ! I0526 21:23:32.522294       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050733  527485 command_runner.go:124] ! I0526 21:23:32.522675       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050747  527485 command_runner.go:124] ! I0526 21:23:32.531035       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050762  527485 command_runner.go:124] ! I0526 21:23:32.531144       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050778  527485 command_runner.go:124] ! I0526 21:23:34.690784       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:14.050792  527485 command_runner.go:124] ! I0526 21:23:34.691285       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.050809  527485 command_runner.go:124] ! I0526 21:23:34.692130       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
	I0526 21:25:14.050822  527485 command_runner.go:124] ! I0526 21:23:34.692740       1 secure_serving.go:197] Serving securely on [::]:8443
	I0526 21:25:14.050835  527485 command_runner.go:124] ! I0526 21:23:34.693343       1 apf_controller.go:261] Starting API Priority and Fairness config controller
	I0526 21:25:14.050847  527485 command_runner.go:124] ! I0526 21:23:34.693677       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:14.050864  527485 command_runner.go:124] ! I0526 21:23:34.694744       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0526 21:25:14.050881  527485 command_runner.go:124] ! I0526 21:23:34.694836       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0526 21:25:14.050899  527485 command_runner.go:124] ! I0526 21:23:34.694880       1 available_controller.go:475] Starting AvailableConditionController
	I0526 21:25:14.050916  527485 command_runner.go:124] ! I0526 21:23:34.694885       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0526 21:25:14.050928  527485 command_runner.go:124] ! I0526 21:23:34.694904       1 autoregister_controller.go:141] Starting autoregister controller
	I0526 21:25:14.050941  527485 command_runner.go:124] ! I0526 21:23:34.694908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0526 21:25:14.050953  527485 command_runner.go:124] ! I0526 21:23:34.696887       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0526 21:25:14.050967  527485 command_runner.go:124] ! I0526 21:23:34.697053       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0526 21:25:14.050986  527485 command_runner.go:124] ! I0526 21:23:34.697670       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	I0526 21:25:14.051000  527485 command_runner.go:124] ! I0526 21:23:34.697935       1 controller.go:83] Starting OpenAPI AggregationController
	I0526 21:25:14.051013  527485 command_runner.go:124] ! I0526 21:23:34.698627       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0526 21:25:14.051026  527485 command_runner.go:124] ! I0526 21:23:34.705120       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.051042  527485 command_runner.go:124] ! I0526 21:23:34.705289       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:14.051057  527485 command_runner.go:124] ! I0526 21:23:34.706119       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0526 21:25:14.051068  527485 command_runner.go:124] ! I0526 21:23:34.706246       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0526 21:25:14.051089  527485 command_runner.go:124] ! E0526 21:23:34.733148       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.229, ResourceVersion: 0, AdditionalErrorMsg: 
	I0526 21:25:14.051103  527485 command_runner.go:124] ! I0526 21:23:34.762565       1 controller.go:86] Starting OpenAPI controller
	I0526 21:25:14.051118  527485 command_runner.go:124] ! I0526 21:23:34.762983       1 naming_controller.go:291] Starting NamingConditionController
	I0526 21:25:14.051130  527485 command_runner.go:124] ! I0526 21:23:34.763230       1 establishing_controller.go:76] Starting EstablishingController
	I0526 21:25:14.051143  527485 command_runner.go:124] ! I0526 21:23:34.763815       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0526 21:25:14.051157  527485 command_runner.go:124] ! I0526 21:23:34.764676       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0526 21:25:14.051167  527485 command_runner.go:124] ! I0526 21:23:34.765003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0526 21:25:14.051181  527485 command_runner.go:124] ! I0526 21:23:34.894833       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0526 21:25:14.051193  527485 command_runner.go:124] ! I0526 21:23:34.895159       1 cache.go:39] Caches are synced for autoregister controller
	I0526 21:25:14.051206  527485 command_runner.go:124] ! I0526 21:23:34.895543       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0526 21:25:14.051221  527485 command_runner.go:124] ! I0526 21:23:34.895893       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0526 21:25:14.051234  527485 command_runner.go:124] ! I0526 21:23:34.897085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0526 21:25:14.051246  527485 command_runner.go:124] ! I0526 21:23:34.899871       1 apf_controller.go:266] Running API Priority and Fairness config worker
	I0526 21:25:14.051258  527485 command_runner.go:124] ! I0526 21:23:34.907242       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0526 21:25:14.051272  527485 command_runner.go:124] ! I0526 21:23:35.022751       1 controller.go:609] quota admission added evaluator for: namespaces
	I0526 21:25:14.051292  527485 command_runner.go:124] ! I0526 21:23:35.690855       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0526 21:25:14.051314  527485 command_runner.go:124] ! I0526 21:23:35.691097       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0526 21:25:14.051332  527485 command_runner.go:124] ! I0526 21:23:35.708402       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0526 21:25:14.051347  527485 command_runner.go:124] ! I0526 21:23:35.726885       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0526 21:25:14.051362  527485 command_runner.go:124] ! I0526 21:23:35.727088       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0526 21:25:14.051379  527485 command_runner.go:124] ! I0526 21:23:36.334571       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0526 21:25:14.051396  527485 command_runner.go:124] ! I0526 21:23:36.389004       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0526 21:25:14.051410  527485 command_runner.go:124] ! W0526 21:23:36.485873       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.229]
	I0526 21:25:14.051426  527485 command_runner.go:124] ! I0526 21:23:36.487435       1 controller.go:609] quota admission added evaluator for: endpoints
	I0526 21:25:14.051441  527485 command_runner.go:124] ! I0526 21:23:36.499209       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0526 21:25:14.051454  527485 command_runner.go:124] ! I0526 21:23:37.294654       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0526 21:25:14.051469  527485 command_runner.go:124] ! I0526 21:23:38.382157       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0526 21:25:14.051484  527485 command_runner.go:124] ! I0526 21:23:38.454712       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0526 21:25:14.051499  527485 command_runner.go:124] ! I0526 21:23:43.955877       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0526 21:25:14.051512  527485 command_runner.go:124] ! I0526 21:23:53.285833       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0526 21:25:14.051524  527485 command_runner.go:124] ! I0526 21:23:53.338274       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0526 21:25:14.051539  527485 command_runner.go:124] ! I0526 21:24:01.973387       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:14.051555  527485 command_runner.go:124] ! I0526 21:24:01.973608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.051569  527485 command_runner.go:124] ! I0526 21:24:01.973627       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.051579  527485 command_runner.go:124] ! I0526 21:24:43.497572       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:14.051595  527485 command_runner.go:124] ! I0526 21:24:43.497775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.051608  527485 command_runner.go:124] ! I0526 21:24:43.498072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.061823  527485 logs.go:123] Gathering logs for coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] ...
	I0526 21:25:14.061838  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:14.082593  527485 command_runner.go:124] > .:53
	I0526 21:25:14.082607  527485 command_runner.go:124] > [INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	I0526 21:25:14.082611  527485 command_runner.go:124] > CoreDNS-1.7.0
	I0526 21:25:14.082617  527485 command_runner.go:124] > linux/amd64, go1.14.4, f59c03d
	I0526 21:25:14.082860  527485 logs.go:123] Gathering logs for kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] ...
	I0526 21:25:14.082878  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:14.120114  527485 command_runner.go:124] ! I0526 21:23:31.228401       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:14.121435  527485 command_runner.go:124] ! W0526 21:23:34.792981       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0526 21:25:14.121461  527485 command_runner.go:124] ! W0526 21:23:34.795544       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:14.121479  527485 command_runner.go:124] ! W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0526 21:25:14.121502  527485 command_runner.go:124] ! W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:25:14.121530  527485 command_runner.go:124] ! I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:14.121583  527485 command_runner.go:124] ! I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:14.121599  527485 command_runner.go:124] ! I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:25:14.121611  527485 command_runner.go:124] ! I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:14.121647  527485 command_runner.go:124] ! E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:14.121686  527485 command_runner.go:124] ! E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:14.121715  527485 command_runner.go:124] ! E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:14.121745  527485 command_runner.go:124] ! E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0526 21:25:14.121778  527485 command_runner.go:124] ! E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0526 21:25:14.121806  527485 command_runner.go:124] ! E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:14.121855  527485 command_runner.go:124] ! E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:14.121887  527485 command_runner.go:124] ! E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:14.121920  527485 command_runner.go:124] ! E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:14.121947  527485 command_runner.go:124] ! E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0526 21:25:14.121978  527485 command_runner.go:124] ! E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:14.122004  527485 command_runner.go:124] ! E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0526 21:25:14.122032  527485 command_runner.go:124] ! E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:14.122059  527485 command_runner.go:124] ! E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:14.122089  527485 command_runner.go:124] ! E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:14.122120  527485 command_runner.go:124] ! E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:14.122153  527485 command_runner.go:124] ! E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:14.122183  527485 command_runner.go:124] ! E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:14.122202  527485 command_runner.go:124] ! I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0526 21:25:14.125846  527485 logs.go:123] Gathering logs for kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] ...
	I0526 21:25:14.125864  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:14.145591  527485 command_runner.go:124] ! I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:25:14.145674  527485 command_runner.go:124] ! I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	I0526 21:25:14.145982  527485 command_runner.go:124] ! W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:25:14.146070  527485 command_runner.go:124] ! I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:25:14.146618  527485 command_runner.go:124] ! I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:25:14.147379  527485 command_runner.go:124] ! I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:25:14.147445  527485 command_runner.go:124] ! I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:25:14.147710  527485 command_runner.go:124] ! I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:25:14.148076  527485 command_runner.go:124] ! I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:25:14.148153  527485 command_runner.go:124] ! I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:25:14.148364  527485 command_runner.go:124] ! I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:25:14.148436  527485 command_runner.go:124] ! I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:25:14.149102  527485 command_runner.go:124] ! I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
	I0526 21:25:14.150528  527485 logs.go:123] Gathering logs for kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] ...
	I0526 21:25:14.150542  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:14.176605  527485 command_runner.go:124] ! Flag --port has been deprecated, see --secure-port instead.
	I0526 21:25:14.176621  527485 command_runner.go:124] ! I0526 21:23:30.770698       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:14.176628  527485 command_runner.go:124] ! I0526 21:23:31.105740       1 controllermanager.go:176] Version: v1.20.2
	I0526 21:25:14.176641  527485 command_runner.go:124] ! I0526 21:23:31.110528       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:14.176653  527485 command_runner.go:124] ! I0526 21:23:31.110685       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.176665  527485 command_runner.go:124] ! I0526 21:23:31.111406       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	I0526 21:25:14.176681  527485 command_runner.go:124] ! I0526 21:23:31.111685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:14.176689  527485 command_runner.go:124] ! I0526 21:23:37.283320       1 shared_informer.go:240] Waiting for caches to sync for tokens
	I0526 21:25:14.176697  527485 command_runner.go:124] ! I0526 21:23:37.384858       1 shared_informer.go:247] Caches are synced for tokens 
	I0526 21:25:14.176704  527485 command_runner.go:124] ! I0526 21:23:37.398260       1 controllermanager.go:554] Started "csrcleaner"
	I0526 21:25:14.176712  527485 command_runner.go:124] ! I0526 21:23:37.398681       1 cleaner.go:82] Starting CSR cleaner controller
	I0526 21:25:14.176719  527485 command_runner.go:124] ! I0526 21:23:37.436326       1 controllermanager.go:554] Started "tokencleaner"
	I0526 21:25:14.176727  527485 command_runner.go:124] ! I0526 21:23:37.436948       1 tokencleaner.go:118] Starting token cleaner controller
	I0526 21:25:14.176735  527485 command_runner.go:124] ! I0526 21:23:37.437051       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0526 21:25:14.176749  527485 command_runner.go:124] ! I0526 21:23:37.437060       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0526 21:25:14.176768  527485 command_runner.go:124] ! E0526 21:23:37.458692       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	I0526 21:25:14.176782  527485 command_runner.go:124] ! W0526 21:23:37.458788       1 controllermanager.go:546] Skipping "service"
	I0526 21:25:14.176795  527485 command_runner.go:124] ! I0526 21:23:37.485897       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	I0526 21:25:14.176807  527485 command_runner.go:124] ! W0526 21:23:37.486148       1 controllermanager.go:546] Skipping "ephemeral-volume"
	I0526 21:25:14.176825  527485 command_runner.go:124] ! I0526 21:23:37.486971       1 publisher.go:98] Starting root CA certificate configmap publisher
	I0526 21:25:14.176840  527485 command_runner.go:124] ! I0526 21:23:37.487325       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0526 21:25:14.176854  527485 command_runner.go:124] ! I0526 21:23:37.514186       1 controllermanager.go:554] Started "endpointslicemirroring"
	I0526 21:25:14.176885  527485 command_runner.go:124] ! I0526 21:23:37.515190       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0526 21:25:14.176898  527485 command_runner.go:124] ! I0526 21:23:37.515570       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0526 21:25:14.176905  527485 command_runner.go:124] ! I0526 21:23:37.550580       1 controllermanager.go:554] Started "replicaset"
	I0526 21:25:14.176913  527485 command_runner.go:124] ! I0526 21:23:37.551218       1 replica_set.go:182] Starting replicaset controller
	I0526 21:25:14.176925  527485 command_runner.go:124] ! I0526 21:23:37.551414       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0526 21:25:14.176937  527485 command_runner.go:124] ! I0526 21:23:37.987267       1 controllermanager.go:554] Started "horizontalpodautoscaling"
	I0526 21:25:14.176950  527485 command_runner.go:124] ! I0526 21:23:37.988181       1 horizontal.go:169] Starting HPA controller
	I0526 21:25:14.176961  527485 command_runner.go:124] ! I0526 21:23:37.988418       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0526 21:25:14.176976  527485 command_runner.go:124] ! I0526 21:23:38.238507       1 controllermanager.go:554] Started "persistentvolume-binder"
	I0526 21:25:14.176990  527485 command_runner.go:124] ! I0526 21:23:38.238941       1 pv_controller_base.go:307] Starting persistent volume controller
	I0526 21:25:14.177004  527485 command_runner.go:124] ! I0526 21:23:38.238953       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0526 21:25:14.177013  527485 command_runner.go:124] ! I0526 21:23:38.636899       1 controllermanager.go:554] Started "garbagecollector"
	I0526 21:25:14.177021  527485 command_runner.go:124] ! I0526 21:23:38.636902       1 garbagecollector.go:142] Starting garbage collector controller
	I0526 21:25:14.177034  527485 command_runner.go:124] ! I0526 21:23:38.636960       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:14.177047  527485 command_runner.go:124] ! I0526 21:23:38.637525       1 graph_builder.go:289] GraphBuilder running
	I0526 21:25:14.177057  527485 command_runner.go:124] ! I0526 21:23:39.037283       1 controllermanager.go:554] Started "disruption"
	I0526 21:25:14.177071  527485 command_runner.go:124] ! I0526 21:23:39.037574       1 disruption.go:331] Starting disruption controller
	I0526 21:25:14.177083  527485 command_runner.go:124] ! I0526 21:23:39.037585       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0526 21:25:14.177097  527485 command_runner.go:124] ! I0526 21:23:39.286540       1 controllermanager.go:554] Started "clusterrole-aggregation"
	I0526 21:25:14.177111  527485 command_runner.go:124] ! I0526 21:23:39.286598       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
	I0526 21:25:14.177122  527485 command_runner.go:124] ! I0526 21:23:39.286605       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0526 21:25:14.177132  527485 command_runner.go:124] ! I0526 21:23:39.537304       1 controllermanager.go:554] Started "pvc-protection"
	I0526 21:25:14.177151  527485 command_runner.go:124] ! I0526 21:23:39.537579       1 pvc_protection_controller.go:110] Starting PVC protection controller
	I0526 21:25:14.177169  527485 command_runner.go:124] ! I0526 21:23:39.537670       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0526 21:25:14.177185  527485 command_runner.go:124] ! I0526 21:23:39.786982       1 controllermanager.go:554] Started "pv-protection"
	I0526 21:25:14.177200  527485 command_runner.go:124] ! I0526 21:23:39.787110       1 pv_protection_controller.go:83] Starting PV protection controller
	I0526 21:25:14.177212  527485 command_runner.go:124] ! I0526 21:23:39.787118       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0526 21:25:14.177222  527485 command_runner.go:124] ! I0526 21:23:40.036383       1 controllermanager.go:554] Started "endpoint"
	I0526 21:25:14.177235  527485 command_runner.go:124] ! I0526 21:23:40.036415       1 endpoints_controller.go:184] Starting endpoint controller
	I0526 21:25:14.177250  527485 command_runner.go:124] ! I0526 21:23:40.037058       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0526 21:25:14.177263  527485 command_runner.go:124] ! I0526 21:23:40.288607       1 controllermanager.go:554] Started "podgc"
	I0526 21:25:14.177276  527485 command_runner.go:124] ! I0526 21:23:40.288827       1 gc_controller.go:89] Starting GC controller
	I0526 21:25:14.177289  527485 command_runner.go:124] ! I0526 21:23:40.289411       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0526 21:25:14.177307  527485 command_runner.go:124] ! W0526 21:23:40.988861       1 shared_informer.go:494] resyncPeriod 13h30m7.5724073s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:14.177323  527485 command_runner.go:124] ! I0526 21:23:40.989960       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
	I0526 21:25:14.177394  527485 command_runner.go:124] ! I0526 21:23:40.990215       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
	I0526 21:25:14.177421  527485 command_runner.go:124] ! I0526 21:23:40.990426       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
	I0526 21:25:14.177437  527485 command_runner.go:124] ! I0526 21:23:40.990971       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0526 21:25:14.177455  527485 command_runner.go:124] ! I0526 21:23:40.991569       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0526 21:25:14.177473  527485 command_runner.go:124] ! I0526 21:23:40.991963       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0526 21:25:14.177489  527485 command_runner.go:124] ! I0526 21:23:40.992141       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
	I0526 21:25:14.177506  527485 command_runner.go:124] ! I0526 21:23:40.992301       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0526 21:25:14.177526  527485 command_runner.go:124] ! I0526 21:23:40.992532       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
	I0526 21:25:14.177544  527485 command_runner.go:124] ! W0526 21:23:40.992690       1 shared_informer.go:494] resyncPeriod 13h37m25.694603534s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:14.177565  527485 command_runner.go:124] ! I0526 21:23:40.993075       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0526 21:25:14.177585  527485 command_runner.go:124] ! I0526 21:23:40.993243       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0526 21:25:14.177601  527485 command_runner.go:124] ! I0526 21:23:40.993580       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
	I0526 21:25:14.177618  527485 command_runner.go:124] ! I0526 21:23:40.993747       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0526 21:25:14.177633  527485 command_runner.go:124] ! I0526 21:23:40.993780       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
	I0526 21:25:14.177649  527485 command_runner.go:124] ! I0526 21:23:40.993805       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0526 21:25:14.177666  527485 command_runner.go:124] ! I0526 21:23:40.993841       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
	I0526 21:25:14.177684  527485 command_runner.go:124] ! I0526 21:23:40.993861       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
	I0526 21:25:14.177704  527485 command_runner.go:124] ! I0526 21:23:40.993876       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
	I0526 21:25:14.177720  527485 command_runner.go:124] ! I0526 21:23:40.993891       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
	I0526 21:25:14.177733  527485 command_runner.go:124] ! I0526 21:23:40.993951       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0526 21:25:14.177746  527485 command_runner.go:124] ! I0526 21:23:40.993980       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0526 21:25:14.177764  527485 command_runner.go:124] ! I0526 21:23:40.994082       1 controllermanager.go:554] Started "resourcequota"
	I0526 21:25:14.177792  527485 command_runner.go:124] ! I0526 21:23:40.994178       1 resource_quota_controller.go:273] Starting resource quota controller
	I0526 21:25:14.177807  527485 command_runner.go:124] ! I0526 21:23:40.994191       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:14.177820  527485 command_runner.go:124] ! I0526 21:23:40.994219       1 resource_quota_monitor.go:304] QuotaMonitor running
	I0526 21:25:14.177829  527485 command_runner.go:124] ! I0526 21:23:41.028175       1 controllermanager.go:554] Started "namespace"
	I0526 21:25:14.177838  527485 command_runner.go:124] ! I0526 21:23:41.028716       1 namespace_controller.go:200] Starting namespace controller
	I0526 21:25:14.177853  527485 command_runner.go:124] ! I0526 21:23:41.028992       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0526 21:25:14.177863  527485 command_runner.go:124] ! I0526 21:23:41.051981       1 controllermanager.go:554] Started "ttl"
	I0526 21:25:14.177874  527485 command_runner.go:124] ! I0526 21:23:41.052926       1 ttl_controller.go:121] Starting TTL controller
	I0526 21:25:14.177886  527485 command_runner.go:124] ! I0526 21:23:41.053383       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0526 21:25:14.177899  527485 command_runner.go:124] ! I0526 21:23:41.289145       1 controllermanager.go:554] Started "attachdetach"
	I0526 21:25:14.177911  527485 command_runner.go:124] ! W0526 21:23:41.289246       1 controllermanager.go:546] Skipping "ttl-after-finished"
	I0526 21:25:14.177921  527485 command_runner.go:124] ! I0526 21:23:41.289282       1 attach_detach_controller.go:328] Starting attach detach controller
	I0526 21:25:14.177933  527485 command_runner.go:124] ! I0526 21:23:41.289291       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0526 21:25:14.177946  527485 command_runner.go:124] ! I0526 21:23:41.537362       1 controllermanager.go:554] Started "serviceaccount"
	I0526 21:25:14.177958  527485 command_runner.go:124] ! I0526 21:23:41.537403       1 serviceaccounts_controller.go:117] Starting service account controller
	I0526 21:25:14.177973  527485 command_runner.go:124] ! I0526 21:23:41.538137       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0526 21:25:14.177985  527485 command_runner.go:124] ! I0526 21:23:41.787243       1 controllermanager.go:554] Started "deployment"
	I0526 21:25:14.177997  527485 command_runner.go:124] ! I0526 21:23:41.788023       1 deployment_controller.go:153] Starting deployment controller
	I0526 21:25:14.178011  527485 command_runner.go:124] ! I0526 21:23:41.790417       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0526 21:25:14.178022  527485 command_runner.go:124] ! I0526 21:23:41.936235       1 controllermanager.go:554] Started "csrapproving"
	I0526 21:25:14.178036  527485 command_runner.go:124] ! I0526 21:23:41.936293       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0526 21:25:14.178051  527485 command_runner.go:124] ! I0526 21:23:41.936301       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0526 21:25:14.178070  527485 command_runner.go:124] ! I0526 21:23:42.137381       1 request.go:655] Throttling request took 1.048213324s, request: GET:https://192.168.39.229:8443/apis/extensions/v1beta1?timeout=32s
	I0526 21:25:14.178084  527485 command_runner.go:124] ! I0526 21:23:42.189224       1 node_ipam_controller.go:91] Sending events to api server.
	I0526 21:25:14.178099  527485 command_runner.go:124] ! I0526 21:23:52.210125       1 range_allocator.go:82] Sending events to api server.
	I0526 21:25:14.178115  527485 command_runner.go:124] ! I0526 21:23:52.211056       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
	I0526 21:25:14.178125  527485 command_runner.go:124] ! I0526 21:23:52.211333       1 controllermanager.go:554] Started "nodeipam"
	I0526 21:25:14.178141  527485 command_runner.go:124] ! W0526 21:23:52.211708       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	I0526 21:25:14.178154  527485 command_runner.go:124] ! W0526 21:23:52.212021       1 controllermanager.go:546] Skipping "route"
	I0526 21:25:14.178167  527485 command_runner.go:124] ! I0526 21:23:52.212292       1 node_ipam_controller.go:159] Starting ipam controller
	I0526 21:25:14.178182  527485 command_runner.go:124] ! I0526 21:23:52.212876       1 shared_informer.go:240] Waiting for caches to sync for node
	I0526 21:25:14.178196  527485 command_runner.go:124] ! I0526 21:23:52.227871       1 node_lifecycle_controller.go:77] Sending events to api server
	I0526 21:25:14.178212  527485 command_runner.go:124] ! E0526 21:23:52.227991       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
	I0526 21:25:14.178222  527485 command_runner.go:124] ! W0526 21:23:52.228003       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
	I0526 21:25:14.178235  527485 command_runner.go:124] ! I0526 21:23:52.257128       1 controllermanager.go:554] Started "persistentvolume-expander"
	I0526 21:25:14.178249  527485 command_runner.go:124] ! I0526 21:23:52.257967       1 expand_controller.go:310] Starting expand controller
	I0526 21:25:14.178263  527485 command_runner.go:124] ! I0526 21:23:52.258344       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0526 21:25:14.178277  527485 command_runner.go:124] ! I0526 21:23:52.287731       1 controllermanager.go:554] Started "endpointslice"
	I0526 21:25:14.178293  527485 command_runner.go:124] ! I0526 21:23:52.287941       1 endpointslice_controller.go:237] Starting endpoint slice controller
	I0526 21:25:14.178308  527485 command_runner.go:124] ! I0526 21:23:52.287950       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0526 21:25:14.178318  527485 command_runner.go:124] ! I0526 21:23:52.334629       1 controllermanager.go:554] Started "daemonset"
	I0526 21:25:14.178329  527485 command_runner.go:124] ! I0526 21:23:52.334789       1 daemon_controller.go:285] Starting daemon sets controller
	I0526 21:25:14.178344  527485 command_runner.go:124] ! I0526 21:23:52.334797       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0526 21:25:14.178361  527485 command_runner.go:124] ! I0526 21:23:52.366633       1 controllermanager.go:554] Started "statefulset"
	I0526 21:25:14.178375  527485 command_runner.go:124] ! I0526 21:23:52.366920       1 stateful_set.go:146] Starting stateful set controller
	I0526 21:25:14.178389  527485 command_runner.go:124] ! I0526 21:23:52.367009       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0526 21:25:14.178402  527485 command_runner.go:124] ! I0526 21:23:52.395670       1 controllermanager.go:554] Started "cronjob"
	I0526 21:25:14.178413  527485 command_runner.go:124] ! I0526 21:23:52.395842       1 cronjob_controller.go:96] Starting CronJob Manager
	I0526 21:25:14.178425  527485 command_runner.go:124] ! I0526 21:23:52.416080       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0526 21:25:14.178441  527485 command_runner.go:124] ! I0526 21:23:52.416256       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0526 21:25:14.178460  527485 command_runner.go:124] ! I0526 21:23:52.416385       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178476  527485 command_runner.go:124] ! I0526 21:23:52.416862       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0526 21:25:14.178491  527485 command_runner.go:124] ! I0526 21:23:52.416958       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0526 21:25:14.178504  527485 command_runner.go:124] ! I0526 21:23:52.416975       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178520  527485 command_runner.go:124] ! I0526 21:23:52.417715       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0526 21:25:14.178537  527485 command_runner.go:124] ! I0526 21:23:52.417882       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0526 21:25:14.178556  527485 command_runner.go:124] ! I0526 21:23:52.418025       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178570  527485 command_runner.go:124] ! I0526 21:23:52.418373       1 controllermanager.go:554] Started "csrsigning"
	I0526 21:25:14.178585  527485 command_runner.go:124] ! I0526 21:23:52.418419       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0526 21:25:14.178601  527485 command_runner.go:124] ! I0526 21:23:52.418799       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178615  527485 command_runner.go:124] ! I0526 21:23:52.418805       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0526 21:25:14.178628  527485 command_runner.go:124] ! I0526 21:23:52.515732       1 controllermanager.go:554] Started "bootstrapsigner"
	I0526 21:25:14.178643  527485 command_runner.go:124] ! I0526 21:23:52.516431       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0526 21:25:14.178658  527485 command_runner.go:124] ! I0526 21:23:52.765741       1 controllermanager.go:554] Started "replicationcontroller"
	I0526 21:25:14.178672  527485 command_runner.go:124] ! I0526 21:23:52.765769       1 replica_set.go:182] Starting replicationcontroller controller
	I0526 21:25:14.178686  527485 command_runner.go:124] ! I0526 21:23:52.765867       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0526 21:25:14.178696  527485 command_runner.go:124] ! I0526 21:23:52.915756       1 node_lifecycle_controller.go:380] Sending events to api server.
	I0526 21:25:14.178708  527485 command_runner.go:124] ! I0526 21:23:52.916150       1 taint_manager.go:163] Sending events to api server.
	I0526 21:25:14.178721  527485 command_runner.go:124] ! I0526 21:23:52.916342       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
	I0526 21:25:14.178731  527485 command_runner.go:124] ! I0526 21:23:52.916386       1 controllermanager.go:554] Started "nodelifecycle"
	I0526 21:25:14.178745  527485 command_runner.go:124] ! I0526 21:23:52.916749       1 node_lifecycle_controller.go:542] Starting node controller
	I0526 21:25:14.178758  527485 command_runner.go:124] ! I0526 21:23:52.916921       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0526 21:25:14.178769  527485 command_runner.go:124] ! I0526 21:23:53.165965       1 controllermanager.go:554] Started "job"
	I0526 21:25:14.178785  527485 command_runner.go:124] ! I0526 21:23:53.166025       1 job_controller.go:148] Starting job controller
	I0526 21:25:14.178793  527485 command_runner.go:124] ! I0526 21:23:53.167211       1 shared_informer.go:240] Waiting for caches to sync for job
	I0526 21:25:14.178804  527485 command_runner.go:124] ! I0526 21:23:53.170385       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:14.178829  527485 command_runner.go:124] ! W0526 21:23:53.178965       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955" does not exist
	I0526 21:25:14.178845  527485 command_runner.go:124] ! I0526 21:23:53.213010       1 shared_informer.go:247] Caches are synced for node 
	I0526 21:25:14.178861  527485 command_runner.go:124] ! I0526 21:23:53.213735       1 range_allocator.go:172] Starting range CIDR allocator
	I0526 21:25:14.178876  527485 command_runner.go:124] ! I0526 21:23:53.214071       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0526 21:25:14.178887  527485 command_runner.go:124] ! I0526 21:23:53.214233       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0526 21:25:14.178899  527485 command_runner.go:124] ! I0526 21:23:53.215982       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0526 21:25:14.178915  527485 command_runner.go:124] ! I0526 21:23:53.216587       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0526 21:25:14.178930  527485 command_runner.go:124] ! I0526 21:23:53.217085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0526 21:25:14.178958  527485 command_runner.go:124] ! I0526 21:23:53.217522       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0526 21:25:14.178974  527485 command_runner.go:124] ! I0526 21:23:53.218215       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0526 21:25:14.178985  527485 command_runner.go:124] ! I0526 21:23:53.218891       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0526 21:25:14.178998  527485 command_runner.go:124] ! I0526 21:23:53.229560       1 shared_informer.go:247] Caches are synced for namespace 
	I0526 21:25:14.179013  527485 command_runner.go:124] ! I0526 21:23:53.235029       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0526 21:25:14.179025  527485 command_runner.go:124] ! I0526 21:23:53.238654       1 shared_informer.go:247] Caches are synced for service account 
	I0526 21:25:14.179039  527485 command_runner.go:124] ! I0526 21:23:53.240824       1 shared_informer.go:247] Caches are synced for endpoint 
	I0526 21:25:14.179054  527485 command_runner.go:124] ! I0526 21:23:53.247379       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0526 21:25:14.179068  527485 command_runner.go:124] ! I0526 21:23:53.251558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0526 21:25:14.179079  527485 command_runner.go:124] ! I0526 21:23:53.252699       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0526 21:25:14.179088  527485 command_runner.go:124] ! I0526 21:23:53.256544       1 shared_informer.go:247] Caches are synced for TTL 
	I0526 21:25:14.179104  527485 command_runner.go:124] ! I0526 21:23:53.265652       1 range_allocator.go:373] Set node multinode-20210526212238-510955 PodCIDR to [10.244.0.0/24]
	I0526 21:25:14.179117  527485 command_runner.go:124] ! I0526 21:23:53.268627       1 shared_informer.go:247] Caches are synced for job 
	I0526 21:25:14.179128  527485 command_runner.go:124] ! I0526 21:23:53.268752       1 shared_informer.go:247] Caches are synced for stateful set 
	I0526 21:25:14.179144  527485 command_runner.go:124] ! I0526 21:23:53.290037       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0526 21:25:14.179159  527485 command_runner.go:124] ! I0526 21:23:53.290226       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0526 21:25:14.179173  527485 command_runner.go:124] ! I0526 21:23:53.292847       1 shared_informer.go:247] Caches are synced for deployment 
	I0526 21:25:14.179184  527485 command_runner.go:124] ! I0526 21:23:53.293728       1 shared_informer.go:247] Caches are synced for GC 
	I0526 21:25:14.179196  527485 command_runner.go:124] ! I0526 21:23:53.293879       1 shared_informer.go:247] Caches are synced for HPA 
	I0526 21:25:14.179211  527485 command_runner.go:124] ! I0526 21:23:53.293974       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0526 21:25:14.179225  527485 command_runner.go:124] ! I0526 21:23:53.317816       1 shared_informer.go:247] Caches are synced for taint 
	I0526 21:25:14.179241  527485 command_runner.go:124] ! I0526 21:23:53.317927       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I0526 21:25:14.179259  527485 command_runner.go:124] ! W0526 21:23:53.318278       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955. Assuming now as a timestamp.
	I0526 21:25:14.179273  527485 command_runner.go:124] ! I0526 21:23:53.318396       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0526 21:25:14.179286  527485 command_runner.go:124] ! I0526 21:23:53.318775       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0526 21:25:14.179314  527485 command_runner.go:124] ! I0526 21:23:53.319750       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955 event: Registered Node multinode-20210526212238-510955 in Controller"
	I0526 21:25:14.179329  527485 command_runner.go:124] ! I0526 21:23:53.337883       1 shared_informer.go:247] Caches are synced for disruption 
	I0526 21:25:14.179342  527485 command_runner.go:124] ! I0526 21:23:53.337896       1 disruption.go:339] Sending events to api server.
	I0526 21:25:14.179356  527485 command_runner.go:124] ! I0526 21:23:53.368948       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0526 21:25:14.179372  527485 command_runner.go:124] ! I0526 21:23:53.431193       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2wgbs"
	I0526 21:25:14.179399  527485 command_runner.go:124] ! I0526 21:23:53.431223       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0526 21:25:14.179414  527485 command_runner.go:124] ! I0526 21:23:53.459736       1 shared_informer.go:247] Caches are synced for expand 
	I0526 21:25:14.179427  527485 command_runner.go:124] ! I0526 21:23:53.479631       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:14.179441  527485 command_runner.go:124] ! I0526 21:23:53.487838       1 shared_informer.go:247] Caches are synced for PV protection 
	I0526 21:25:14.179455  527485 command_runner.go:124] ! I0526 21:23:53.489356       1 shared_informer.go:247] Caches are synced for attach detach 
	I0526 21:25:14.179471  527485 command_runner.go:124] ! I0526 21:23:53.494672       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:14.179483  527485 command_runner.go:124] ! I0526 21:23:53.539359       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0526 21:25:14.179501  527485 command_runner.go:124] ! I0526 21:23:53.545401       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbl42"
	I0526 21:25:14.179525  527485 command_runner.go:124] ! I0526 21:23:53.545422       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:14.179548  527485 command_runner.go:124] ! I0526 21:23:53.556102       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-multinode-20210526212238-510955" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:25:14.179571  527485 command_runner.go:124] ! I0526 21:23:53.567036       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tw67b"
	I0526 21:25:14.179590  527485 command_runner.go:124] ! E0526 21:23:53.635384       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0526 21:25:14.179607  527485 command_runner.go:124] ! I0526 21:23:53.689947       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:14.179630  527485 command_runner.go:124] ! I0526 21:23:53.733785       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0526 21:25:14.179653  527485 command_runner.go:124] ! I0526 21:23:53.758013       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:14.179668  527485 command_runner.go:124] ! I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:14.179683  527485 command_runner.go:124] ! I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:14.179695  527485 command_runner.go:124] ! I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:25:14.179714  527485 command_runner.go:124] ! I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0526 21:25:14.189022  527485 logs.go:123] Gathering logs for containerd ...
	I0526 21:25:14.189041  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:25:14.204280  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:14 UTC. --
	I0526 21:25:14.204307  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:14.204320  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:14.204341  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.412639957Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:14.204368  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.454795725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:14.204394  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.455022736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204432  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456819758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.204457  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456940685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204492  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457199432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.204526  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457299817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204552  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457342626Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:14.204575  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457353348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204599  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457375564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204624  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457518971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204658  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457752665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.204683  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457768067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:14.204709  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457801760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:14.204731  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457811694Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:14.204760  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461742670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:14.204788  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461851430Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:14.204816  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462036878Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204840  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462069131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204885  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462082171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204912  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462094524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204930  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462115116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204947  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462127721Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204963  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462139766Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204981  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462157542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204997  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462167923Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:14.205015  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462295610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:14.205033  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462357720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:14.205049  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462745295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205064  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462770123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:14.205079  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462815565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205095  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462827921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205109  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462846347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205126  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462857513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205141  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462870788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205158  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462881154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205191  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462892049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205207  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462903002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205222  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462913917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:14.205239  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462958764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205255  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462972025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205275  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462983386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205290  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462994704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205308  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463133131Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `systemd_cgroup` only works for runtime io.containerd.runtime.v1.linux"
	I0526 21:25:14.205323  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463145276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205337  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463363744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:14.205351  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463401676Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:14.205364  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463415404Z" level=info msg="containerd successfully booted in 0.052163s"
	I0526 21:25:14.205375  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Stopping containerd container runtime...
	I0526 21:25:14.205385  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: containerd.service: Succeeded.
	I0526 21:25:14.205394  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Stopped containerd container runtime.
	I0526 21:25:14.205403  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:14.205412  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:14.205425  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.677351233Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:14.205440  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703735354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:14.205455  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703939180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205478  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706070962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.205496  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706222939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205519  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706683734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.205537  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706837938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205554  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706963959Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:14.205571  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707081760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205586  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707216688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205602  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707381113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205624  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707841019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.205641  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707973506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:14.205656  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708095816Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:14.205670  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708236930Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:14.205685  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708536776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:14.205703  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708698510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:14.205727  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708937323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205751  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709074999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205774  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709196994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205800  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709315424Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205822  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709506686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205841  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709629192Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205875  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709743913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205900  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709857985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205924  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709979410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:14.205946  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710125076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:14.205970  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710271949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:14.205994  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710830775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.206016  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710974791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:14.206032  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711117145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206047  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711243334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206065  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711363735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206080  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711549081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206094  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711666234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206109  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711781506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206124  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711895813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206139  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712013139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206153  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712131897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:14.206168  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712269473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206184  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712503525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206200  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712659007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206217  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712779064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206236  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712986218Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	I0526 21:25:14.206335  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713141331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00037b050 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.2 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	I0526 21:25:14.206351  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713322225Z" level=info msg="Connect containerd service"
	I0526 21:25:14.206370  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713538361Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	I0526 21:25:14.206403  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714213931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:14.206429  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714359921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206452  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714868242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:14.206474  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715023618Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:14.206497  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715142631Z" level=info msg="containerd successfully booted in 0.038760s"
	I0526 21:25:14.206519  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726087774Z" level=info msg="Start subscribing containerd event"
	I0526 21:25:14.206538  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726733995Z" level=info msg="Start recovering state"
	I0526 21:25:14.206559  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781395051Z" level=info msg="Start event monitor"
	I0526 21:25:14.206578  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781771001Z" level=info msg="Start snapshots syncer"
	I0526 21:25:14.206598  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781893491Z" level=info msg="Start cni network conf syncer"
	I0526 21:25:14.206619  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.782003464Z" level=info msg="Start streaming server"
	I0526 21:25:14.206647  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.484581294Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206668  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.490843770Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206688  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.501056680Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206714  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.508591647Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206742  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.580716340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486 pid=2407
	I0526 21:25:14.206772  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.598809833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb pid=2435
	I0526 21:25:14.206801  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602060491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5 pid=2434
	I0526 21:25:14.206831  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602007310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e pid=2452
	I0526 21:25:14.206862  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.066808539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\""
	I0526 21:25:14.206885  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.074803022Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	I0526 21:25:14.206908  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.194718464Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:14.206924  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.196219933Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:14.206949  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.262678371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\""
	I0526 21:25:14.206969  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.272571919Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	I0526 21:25:14.206996  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.347228547Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:14.207018  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.349365690Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:14.207044  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.419043703Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\" returns successfully"
	I0526 21:25:14.207077  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.520520792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\""
	I0526 21:25:14.207107  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.527415671Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	I0526 21:25:14.207140  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.566421321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\""
	I0526 21:25:14.207168  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.575850717Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	I0526 21:25:14.207194  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.621335319Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:14.207211  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.623169879Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:14.207228  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.681255114Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\" returns successfully"
	I0526 21:25:14.207250  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.683704929Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:14.207272  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.684577023Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:14.207290  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.017920282Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\" returns successfully"
	I0526 21:25:14.207307  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.056525418Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\" returns successfully"
	I0526 21:25:14.207364  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.290788536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	I0526 21:25:14.207386  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.802102062Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207410  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.839975209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8 pid=2987
	I0526 21:25:14.207437  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.915628984Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207469  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.950847165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a pid=3013
	I0526 21:25:14.207500  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.116312794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\""
	I0526 21:25:14.207529  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.127305490Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	I0526 21:25:14.207562  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.182202148Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:14.207586  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.188910123Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:14.207608  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.381612238Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\" returns successfully"
	I0526 21:25:14.207629  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.674364903Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	I0526 21:25:14.207651  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683119285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:14.207674  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683711665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:14.207692  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582858367Z" level=error msg="get state for 53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8" error="context deadline exceeded: unknown"
	I0526 21:25:14.207711  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582967226Z" level=warning msg="unknown status" status=0
	I0526 21:25:14.207744  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.969753374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\""
	I0526 21:25:14.207773  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.975070195Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	I0526 21:25:14.207807  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.027887855Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:14.207833  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.029566085Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:14.207858  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.574608517Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\" returns successfully"
	I0526 21:25:14.207886  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.297649575Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207912  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.323344186Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207941  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.332120092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55 pid=3313
	I0526 21:25:14.207969  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.442356819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900 pid=3376
	I0526 21:25:14.207994  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.792546853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\""
	I0526 21:25:14.208013  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.796339883Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	I0526 21:25:14.208035  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.843281999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\""
	I0526 21:25:14.208056  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.849108598Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	I0526 21:25:14.208079  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.875948742Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:14.208094  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.879073015Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:14.208116  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.915826719Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:14.208131  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.918179651Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:14.208149  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.083539707Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\" returns successfully"
	I0526 21:25:14.208166  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.120722012Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\" returns successfully"
	I0526 21:25:14.226621  527485 logs.go:123] Gathering logs for kubelet ...
	I0526 21:25:14.226646  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:25:14.238565  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:14 UTC. --
	I0526 21:25:14.238594  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:14.238624  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.238677  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.238691  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365155    2343 server.go:416] Version: v1.20.2
	I0526 21:25:14.238717  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365664    2343 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:14.238744  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.382328    2343 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.238775  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:22.383887    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.238812  527485 command_runner.go:124] > May 26 21:23:24 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:24.586559    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.238838  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.392858    2343 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:14.238861  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.393993    2343 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:14.238911  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.394298    2343 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:14.238945  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395126    2343 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:14.238958  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395334    2343 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:14.238972  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395348    2343 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:14.238984  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395816    2343 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:14.239001  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395929    2343 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.239020  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396315    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.239033  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396571    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.239045  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397666    2343 remote_image.go:50] parsed scheme: ""
	I0526 21:25:14.239058  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397691    2343 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.239075  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397829    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.239089  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397957    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.239103  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.400786    2343 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:14.239115  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.401761    2343 kubelet.go:273] Watching apiserver
	I0526 21:25:14.239137  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.419726    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239159  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.433343    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239174  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.434846    2343 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:14.239199  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.435179    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239215  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.695431    2343 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:14.239231  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:14.239247  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:27.696850    2343 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0526 21:25:14.239264  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.698714    2343 server.go:1176] Started kubelet
	I0526 21:25:14.239277  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.699681    2343 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:14.239290  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.701131    2343 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:14.239391  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.701698    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:14.239422  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.703923    2343 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:14.239432  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.707734    2343 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:14.239447  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.708096    2343 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:14.239471  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.708889    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239494  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.709701    2343 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239512  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.711040    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.239524  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711583    2343 client.go:86] parsed scheme: "unix"
	I0526 21:25:14.239538  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711779    2343 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:14.239554  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712280    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.239569  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712673    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.239583  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782226    2343 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:14.239595  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782318    2343 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:14.239606  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782638    2343 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:14.239687  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.799125    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.",
Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:14.239705  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.809183    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.239737  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810505    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239757  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810636    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.239777  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876097    2343 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:14.239796  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876127    2343 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:14.239817  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876145    2343 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:14.239843  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.876191    2343 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:14.239878  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.877853    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239914  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910604    2343 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239931  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910787    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.239947  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.976408    2343 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
	I0526 21:25:14.239962  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.987845    2343 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:14.239997  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.000709    2343 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:14.240011  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.001042    2343 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:14.240028  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.004395    2343 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240041  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.010900    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240055  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.011906    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.240076  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.012281    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240091  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.111839    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240105  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.177382    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240119  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.180087    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240133  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.181373    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240146  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.182941    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240174  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185069    2343 status_manager.go:550] Failed to get status for pod "kube-controller-manager-multinode-20210526212238-510955_kube-system(474c55dfb64741cc485e46b6bb9f2dc0)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240200  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185417    2343 status_manager.go:550] Failed to get status for pod "kube-scheduler-multinode-20210526212238-510955_kube-system(6b4a0ee8b3d15a1c2e47c15d32e6eb0d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240226  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.201047    2343 status_manager.go:550] Failed to get status for pod "kube-apiserver-multinode-20210526212238-510955_kube-system(b42b6879229f245abab6047de8662a2f)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240250  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.202364    2343 status_manager.go:550] Failed to get status for pod "etcd-multinode-20210526212238-510955_kube-system(34530b4d5ce1b17919f3b8976b2d0456)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240270  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.212215    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240294  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309602    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.240320  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309839    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.240344  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310062    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.240370  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310275    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240393  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310572    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240417  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310900    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240441  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311066    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.240466  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311200    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240491  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311326    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240514  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.311324    2343 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240538  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311643    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:14.240561  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311955    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.240574  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.312763    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240599  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.318006    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240624  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.361617    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240637  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.412938    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240651  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.414299    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.240671  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.420140    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240684  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.513925    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240698  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.614235    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240732  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.620010    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240753  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.714407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240788  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.717664    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240809  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.815037    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240835  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.819848    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240849  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.915364    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240877  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.015843    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240902  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.112804    2343 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240916  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.116234    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240929  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.217167    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240953  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.219890    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240967  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:29.223096    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.240981  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.317528    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240994  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.418231    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241014  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.419707    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.241027  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.520018    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241040  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.620736    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241053  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.721115    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241071  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.821411    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241093  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.921772    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241113  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.022147    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241133  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.122970    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241153  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.223407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241178  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.323609    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241196  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.424033    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241213  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.524613    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241227  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.625186    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241240  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.725563    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241255  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.826076    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241272  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.932677    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241287  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:31.021296    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.241343  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.033185    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241364  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.133540    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241383  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.234158    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241404  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.334934    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241425  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.435265    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241445  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.535646    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241462  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.636091    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241478  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.736769    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241490  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.837337    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241502  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.937851    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241524  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.038171    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241537  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.138719    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241548  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.239058    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241559  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.339598    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241572  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.440290    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241584  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.540624    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241596  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.641006    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241608  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.741403    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241619  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.841966    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241631  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.942585    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241647  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.002095    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.241661  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.042747    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241681  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.142869    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241695  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.243254    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241714  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.343706    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241735  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.444105    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241756  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.545421    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241777  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.645867    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241797  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.746343    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241816  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.846868    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241836  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.947104    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241856  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.047842    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241875  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.148334    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241898  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.248550    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241918  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.349232    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241938  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.449632    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241957  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.549987    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241977  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.650314    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241995  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.751182    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.242009  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:34.832693    2343 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:14.242025  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.841269    2343 nodelease.go:49] failed to get node "multinode-20210526212238-510955" when trying to set owner ref to the node lease: nodes "multinode-20210526212238-510955" not found
	I0526 21:25:14.242040  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.851652    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.242061  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.952325    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.242084  527485 command_runner.go:124] > May 26 21:23:35 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:35.015600    2343 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:14.242111  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:38.003372    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.242134  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:38.252332    2343 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.242155  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0526 21:25:14.242171  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: kubelet.service: Succeeded.
	I0526 21:25:14.242187  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0526 21:25:14.242204  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:14.242233  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.242270  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.242289  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.567074    2767 server.go:416] Version: v1.20.2
	I0526 21:25:14.242311  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.569090    2767 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:14.242334  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.580189    2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0526 21:25:14.242356  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.581836    2767 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.242377  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.594567    2767 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:14.242398  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596007    2767 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:14.242461  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596173    2767 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:14.242494  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596418    2767 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:14.242515  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596689    2767 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:14.242535  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596801    2767 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:14.242555  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597107    2767 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:14.242576  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597233    2767 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.242600  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597387    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.242619  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597579    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.242642  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597846    2767 remote_image.go:50] parsed scheme: ""
	I0526 21:25:14.242662  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597965    2767 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.242685  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598781    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.242705  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598958    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.242726  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599605    2767 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:14.242744  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599963    2767 kubelet.go:273] Watching apiserver
	I0526 21:25:14.242769  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.629159    2767 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:14.242793  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.914429    2767 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:14.242812  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:14.242832  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.918059    2767 server.go:1176] Started kubelet
	I0526 21:25:14.242851  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.928363    2767 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:14.242871  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.931699    2767 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:14.242891  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.943931    2767 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:14.242912  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.945256    2767 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:14.242938  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.949736    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.242958  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.949953    2767 client.go:86] parsed scheme: "unix"
	I0526 21:25:14.242978  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950079    2767 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:14.243003  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950244    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.243024  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950360    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.243046  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.960536    2767 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:14.243068  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.047200    2767 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.243089  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063101    2767 kubelet_node_status.go:109] Node multinode-20210526212238-510955 was previously registered
	I0526 21:25:14.243110  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063585    2767 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:14.243130  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100820    2767 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:14.243154  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100987    2767 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:14.243173  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.101019    2767 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:14.243198  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:44.101062    2767 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:14.243214  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167420    2767 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:14.243225  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167823    2767 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:14.243244  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167963    2767 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:14.243268  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168200    2767 state_mem.go:88] [cpumanager] updated default cpuset: ""
	I0526 21:25:14.243288  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168317    2767 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	I0526 21:25:14.243307  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168438    2767 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:14.243326  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: W0526 21:23:44.170589    2767 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:14.243346  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.170973    2767 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:14.243366  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201167    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243386  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201423    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243406  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202839    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243425  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202968    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243459  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349811    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243495  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349855    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243533  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349894    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.243567  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349913    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.243609  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243645  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243681  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243714  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:14.243746  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349988    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.243777  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350013    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.243814  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350027    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.243834  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:14.243861  527485 command_runner.go:124] > May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.243883  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	I0526 21:25:14.243903  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	I0526 21:25:14.243930  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.243950  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243984  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244021  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244056  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244089  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244111  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.244144  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244177  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244210  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244242  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244274  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.244307  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	I0526 21:25:14.244328  527485 command_runner.go:124] > May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.244349  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.244381  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:14.244417  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:14.244451  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:14.244485  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:16.777174  527485 api_server.go:223] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0526 21:25:16.786618  527485 api_server.go:249] https://192.168.39.229:8443/healthz returned 200:
	ok
	I0526 21:25:16.786691  527485 round_trippers.go:422] GET https://192.168.39.229:8443/version?timeout=32s
	I0526 21:25:16.786700  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:16.786705  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:16.786709  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:16.787858  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:16.787878  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:16.787883  527485 round_trippers.go:454]     Content-Length: 263
	I0526 21:25:16.787888  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:16 GMT
	I0526 21:25:16.787893  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:16.787897  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:16.787906  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:16.787913  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:16.787945  527485 request.go:1107] Response Body: {
	  "major": "1",
	  "minor": "20",
	  "gitVersion": "v1.20.2",
	  "gitCommit": "faecb196815e248d3ecfb03c680a4507229c2a56",
	  "gitTreeState": "clean",
	  "buildDate": "2021-01-13T13:20:00Z",
	  "goVersion": "go1.15.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0526 21:25:16.788058  527485 api_server.go:139] control plane version: v1.20.2
	I0526 21:25:16.788076  527485 api_server.go:129] duration metric: took 3.203268839s to wait for apiserver health ...
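	For reference, a minimal Go sketch of the apiserver readiness probe performed above (GET /healthz, then GET /version), assuming the endpoint 192.168.39.229:8443 taken from the log; TLS verification is skipped only so the sketch stands alone, whereas the real client authenticates with the cluster's client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{
				// Assumption: skip certificate validation so the sketch runs
				// without the cluster CA; minikube itself uses client certs.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Probe /healthz first, then /version, mirroring the two requests in the log.
		for _, path := range []string{"/healthz", "/version?timeout=32s"} {
			resp, err := client.Get("https://192.168.39.229:8443" + path)
			if err != nil {
				fmt.Println("probe failed:", err)
				return
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("GET %s -> %d\n%s\n", path, resp.StatusCode, body)
		}
	}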
	I0526 21:25:16.788086  527485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0526 21:25:16.788110  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:25:16.788165  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:25:16.807484  527485 command_runner.go:124] > a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c
	I0526 21:25:16.808417  527485 cri.go:76] found id: "a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:16.808439  527485 cri.go:76] found id: ""
	I0526 21:25:16.808445  527485 logs.go:270] 1 containers: [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c]
	I0526 21:25:16.808484  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.812497  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.812957  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:25:16.813019  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:25:16.842444  527485 command_runner.go:124] > c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad
	I0526 21:25:16.842473  527485 cri.go:76] found id: "c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:16.842480  527485 cri.go:76] found id: ""
	I0526 21:25:16.842485  527485 logs.go:270] 1 containers: [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad]
	I0526 21:25:16.842519  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.849048  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.849082  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:25:16.849117  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:25:16.872050  527485 command_runner.go:124] > a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a
	I0526 21:25:16.872680  527485 cri.go:76] found id: "a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:16.872698  527485 cri.go:76] found id: ""
	I0526 21:25:16.872705  527485 logs.go:270] 1 containers: [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a]
	I0526 21:25:16.872751  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.876656  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.876834  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:25:16.876895  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:25:16.892544  527485 command_runner.go:124] > e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08
	I0526 21:25:16.893901  527485 cri.go:76] found id: "e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:16.893914  527485 cri.go:76] found id: ""
	I0526 21:25:16.893919  527485 logs.go:270] 1 containers: [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08]
	I0526 21:25:16.893946  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.897825  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.898071  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:25:16.898107  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:25:16.919930  527485 command_runner.go:124] > de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2
	I0526 21:25:16.922816  527485 cri.go:76] found id: "de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:16.922830  527485 cri.go:76] found id: ""
	I0526 21:25:16.922834  527485 logs.go:270] 1 containers: [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2]
	I0526 21:25:16.922862  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.927749  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.927776  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:25:16.927805  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:25:16.946206  527485 cri.go:76] found id: ""
	I0526 21:25:16.946219  527485 logs.go:270] 0 containers: []
	W0526 21:25:16.946223  527485 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:25:16.946228  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:25:16.946261  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:25:16.966964  527485 command_runner.go:124] > 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d
	I0526 21:25:16.967146  527485 cri.go:76] found id: "5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:16.967159  527485 cri.go:76] found id: ""
	I0526 21:25:16.967163  527485 logs.go:270] 1 containers: [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d]
	I0526 21:25:16.967188  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.970708  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.970734  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:25:16.970763  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:25:16.988710  527485 command_runner.go:124] > 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18
	I0526 21:25:16.989620  527485 cri.go:76] found id: "2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:16.989633  527485 cri.go:76] found id: ""
	I0526 21:25:16.989637  527485 logs.go:270] 1 containers: [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18]
	I0526 21:25:16.989660  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.994092  527485 command_runner.go:124] > /bin/crictl
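	For reference, a minimal Go sketch of the container-enumeration loop above, assuming crictl is on PATH on the node (as in this run) and shelling out the same `crictl ps -a --quiet --name=<component>` command once per control-plane component:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same component names the log above queries, in the same order.
		names := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
			"kube-controller-manager",
		}
		for _, name := range names {
			// `--quiet` prints only container IDs, one per line.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
		}
	}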
	I0526 21:25:16.994233  527485 logs.go:123] Gathering logs for kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] ...
	I0526 21:25:16.994245  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:17.011911  527485 command_runner.go:124] ! I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:25:17.011987  527485 command_runner.go:124] ! I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	I0526 21:25:17.012013  527485 command_runner.go:124] ! W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:25:17.012024  527485 command_runner.go:124] ! I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:25:17.012032  527485 command_runner.go:124] ! I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:25:17.012049  527485 command_runner.go:124] ! I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:25:17.012065  527485 command_runner.go:124] ! I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:25:17.012078  527485 command_runner.go:124] ! I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:25:17.012093  527485 command_runner.go:124] ! I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:25:17.012106  527485 command_runner.go:124] ! I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:25:17.012122  527485 command_runner.go:124] ! I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:25:17.012137  527485 command_runner.go:124] ! I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:25:17.012151  527485 command_runner.go:124] ! I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
	I0526 21:25:17.013510  527485 logs.go:123] Gathering logs for storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] ...
	I0526 21:25:17.013527  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:17.033906  527485 command_runner.go:124] ! I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:25:17.034005  527485 command_runner.go:124] ! I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:25:17.034391  527485 command_runner.go:124] ! I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:25:17.034643  527485 command_runner.go:124] ! I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:25:17.034740  527485 command_runner.go:124] ! I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:17.035233  527485 command_runner.go:124] ! I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:25:17.035495  527485 command_runner.go:124] ! I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
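	For reference, a minimal Go sketch of the per-container log collection above, which shells out `crictl logs --tail 400 <id>` for each container found; the ID below is the kube-proxy container ID from this run and stands in for whichever IDs the enumeration step returns:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Placeholder: the kube-proxy container ID seen earlier in this log.
		id := "de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
		// Capture the last 400 lines of the container's logs, as the log above does.
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("crictl logs failed:", err)
		}
		fmt.Print(string(out))
	}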
	I0526 21:25:17.037922  527485 logs.go:123] Gathering logs for kubelet ...
	I0526 21:25:17.037940  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:25:17.051460  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:17 UTC. --
	I0526 21:25:17.051482  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:17.051506  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.051552  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.051570  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365155    2343 server.go:416] Version: v1.20.2
	I0526 21:25:17.051596  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365664    2343 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:17.051619  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.382328    2343 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.051656  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:22.383887    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.051689  527485 command_runner.go:124] > May 26 21:23:24 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:24.586559    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.051718  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.392858    2343 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:17.051742  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.393993    2343 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:17.051815  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.394298    2343 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:17.051842  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395126    2343 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:17.052098  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395334    2343 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:17.052118  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395348    2343 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:17.052128  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395816    2343 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:17.052143  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395929    2343 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.052165  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396315    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.052184  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396571    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.052195  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397666    2343 remote_image.go:50] parsed scheme: ""
	I0526 21:25:17.052209  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397691    2343 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.052224  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397829    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.052239  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397957    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.052259  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.400786    2343 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:17.052278  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.401761    2343 kubelet.go:273] Watching apiserver
	I0526 21:25:17.052311  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.419726    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052346  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.433343    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052370  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.434846    2343 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:17.052408  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.435179    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052430  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.695431    2343 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:17.052443  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:17.052459  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:27.696850    2343 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0526 21:25:17.052480  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.698714    2343 server.go:1176] Started kubelet
	I0526 21:25:17.052500  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.699681    2343 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:17.052521  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.701131    2343 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:17.052640  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.701698    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.",
Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:17.052660  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.703923    2343 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:17.052677  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.707734    2343 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:17.052699  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.708096    2343 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:17.052736  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.708889    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052779  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.709701    2343 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052809  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.711040    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.052824  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711583    2343 client.go:86] parsed scheme: "unix"
	I0526 21:25:17.052846  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711779    2343 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:17.052887  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712280    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.052907  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712673    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.052936  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782226    2343 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:17.052957  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782318    2343 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:17.052979  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782638    2343 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:17.053062  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.799125    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.",
Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:17.053078  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.809183    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.053097  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810505    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053116  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810636    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053130  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876097    2343 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:17.053143  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876127    2343 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:17.053155  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876145    2343 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:17.053173  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.876191    2343 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:17.053197  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.877853    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053223  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910604    2343 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053237  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910787    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053253  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.976408    2343 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
	I0526 21:25:17.053267  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.987845    2343 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:17.053298  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.000709    2343 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:17.053312  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.001042    2343 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:17.053331  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.004395    2343 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053347  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.010900    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053362  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.011906    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.053382  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.012281    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053395  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.111839    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053409  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.177382    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053422  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.180087    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053434  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.181373    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053448  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.182941    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053474  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185069    2343 status_manager.go:550] Failed to get status for pod "kube-controller-manager-multinode-20210526212238-510955_kube-system(474c55dfb64741cc485e46b6bb9f2dc0)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053499  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185417    2343 status_manager.go:550] Failed to get status for pod "kube-scheduler-multinode-20210526212238-510955_kube-system(6b4a0ee8b3d15a1c2e47c15d32e6eb0d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053524  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.201047    2343 status_manager.go:550] Failed to get status for pod "kube-apiserver-multinode-20210526212238-510955_kube-system(b42b6879229f245abab6047de8662a2f)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053549  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.202364    2343 status_manager.go:550] Failed to get status for pod "etcd-multinode-20210526212238-510955_kube-system(34530b4d5ce1b17919f3b8976b2d0456)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053566  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.212215    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053588  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309602    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.053610  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309839    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.053636  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310062    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.053659  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310275    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053687  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310572    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053710  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310900    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053732  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311066    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.053755  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311200    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053783  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311326    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053809  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.311324    2343 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053833  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311643    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:17.053855  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311955    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.053869  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.312763    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053894  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.318006    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053919  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.361617    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053932  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.412938    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053946  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.414299    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.053967  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.420140    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053982  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.513925    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053995  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.614235    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054019  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.620010    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054032  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.714407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054056  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.717664    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054069  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.815037    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054092  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.819848    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054105  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.915364    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054119  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.015843    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054143  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.112804    2343 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054157  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.116234    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054170  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.217167    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054195  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.219890    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054209  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:29.223096    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.054221  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.317528    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054235  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.418231    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054255  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.419707    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054272  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.520018    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054287  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.620736    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054300  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.721115    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054312  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.821411    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054325  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.921772    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054338  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.022147    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054352  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.122970    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054365  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.223407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054378  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.323609    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054391  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.424033    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054403  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.524613    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054416  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.625186    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054429  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.725563    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054445  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.826076    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054458  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.932677    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054472  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:31.021296    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.054525  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.033185    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054539  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.133540    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054554  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.234158    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054567  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.334934    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054581  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.435265    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054592  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.535646    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054605  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.636091    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054618  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.736769    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054632  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.837337    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054644  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.937851    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054658  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.038171    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054670  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.138719    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054683  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.239058    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054696  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.339598    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054711  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.440290    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054724  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.540624    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054737  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.641006    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054750  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.741403    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054767  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.841966    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054781  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.942585    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054797  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.002095    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.054810  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.042747    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054825  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.142869    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054839  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.243254    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054852  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.343706    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054867  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.444105    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054880  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.545421    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054893  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.645867    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054906  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.746343    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054919  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.846868    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054932  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.947104    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054946  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.047842    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054959  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.148334    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054971  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.248550    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054984  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.349232    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054997  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.449632    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055009  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.549987    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055024  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.650314    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055038  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.751182    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055051  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:34.832693    2343 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:17.055068  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.841269    2343 nodelease.go:49] failed to get node "multinode-20210526212238-510955" when trying to set owner ref to the node lease: nodes "multinode-20210526212238-510955" not found
	I0526 21:25:17.055082  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.851652    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055098  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.952325    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055112  527485 command_runner.go:124] > May 26 21:23:35 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:35.015600    2343 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:17.055129  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:38.003372    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.055146  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:38.252332    2343 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.055160  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0526 21:25:17.055169  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: kubelet.service: Succeeded.
	I0526 21:25:17.055180  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0526 21:25:17.055191  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:17.055210  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.055230  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.055242  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.567074    2767 server.go:416] Version: v1.20.2
	I0526 21:25:17.055257  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.569090    2767 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:17.055274  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.580189    2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0526 21:25:17.055289  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.581836    2767 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.055303  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.594567    2767 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:17.055318  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596007    2767 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:17.055360  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596173    2767 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:17.055375  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596418    2767 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:17.055391  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596689    2767 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:17.055405  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596801    2767 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:17.055419  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597107    2767 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:17.055431  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597233    2767 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.055447  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597387    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.055459  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597579    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.055473  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597846    2767 remote_image.go:50] parsed scheme: ""
	I0526 21:25:17.055487  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597965    2767 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.055504  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598781    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.055518  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598958    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.055529  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599605    2767 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:17.055541  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599963    2767 kubelet.go:273] Watching apiserver
	I0526 21:25:17.055555  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.629159    2767 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:17.055572  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.914429    2767 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:17.055586  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:17.055598  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.918059    2767 server.go:1176] Started kubelet
	I0526 21:25:17.055610  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.928363    2767 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:17.055620  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.931699    2767 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:17.055633  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.943931    2767 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:17.055645  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.945256    2767 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:17.055663  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.949736    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.055675  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.949953    2767 client.go:86] parsed scheme: "unix"
	I0526 21:25:17.055688  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950079    2767 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:17.055704  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950244    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.055718  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950360    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.055730  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.960536    2767 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:17.055744  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.047200    2767 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.055758  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063101    2767 kubelet_node_status.go:109] Node multinode-20210526212238-510955 was previously registered
	I0526 21:25:17.055776  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063585    2767 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:17.055790  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100820    2767 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:17.055804  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100987    2767 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:17.055818  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.101019    2767 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:17.055838  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:44.101062    2767 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:17.055852  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167420    2767 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:17.055865  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167823    2767 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:17.055878  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167963    2767 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:17.055890  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168200    2767 state_mem.go:88] [cpumanager] updated default cpuset: ""
	I0526 21:25:17.055903  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168317    2767 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	I0526 21:25:17.055915  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168438    2767 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:17.055930  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: W0526 21:23:44.170589    2767 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:17.055942  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.170973    2767 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:17.055956  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201167    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.055969  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201423    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.055982  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202839    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.055995  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202968    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056017  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349811    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056046  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349855    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056070  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349894    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.056093  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349913    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.056118  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056142  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056166  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056189  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:17.056212  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349988    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.056235  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350013    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.056259  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350027    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.056275  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:17.056293  527485 command_runner.go:124] > May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.056308  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	I0526 21:25:17.056322  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	I0526 21:25:17.056340  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.056355  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056378  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056402  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056428  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056449  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056464  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056486  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056511  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056534  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056556  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056577  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.056600  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	I0526 21:25:17.056613  527485 command_runner.go:124] > May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056627  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056648  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:17.056671  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:17.056693  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:17.056716  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:17.085352  527485 logs.go:123] Gathering logs for dmesg ...
	I0526 21:25:17.085369  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:25:17.094718  527485 command_runner.go:124] > [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0526 21:25:17.094745  527485 command_runner.go:124] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0526 21:25:17.094759  527485 command_runner.go:124] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0526 21:25:17.094781  527485 command_runner.go:124] > [  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0526 21:25:17.094792  527485 command_runner.go:124] > [  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	I0526 21:25:17.094798  527485 command_runner.go:124] >               If you want to keep using the local clock, then add:
	I0526 21:25:17.094804  527485 command_runner.go:124] >                 "trace_clock=local"
	I0526 21:25:17.094809  527485 command_runner.go:124] >               on the kernel command line
	I0526 21:25:17.094818  527485 command_runner.go:124] > [  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0526 21:25:17.094831  527485 command_runner.go:124] > [  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	I0526 21:25:17.094850  527485 command_runner.go:124] > [  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	I0526 21:25:17.094870  527485 command_runner.go:124] > [  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	I0526 21:25:17.094888  527485 command_runner.go:124] > [  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	I0526 21:25:17.094897  527485 command_runner.go:124] > [  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	I0526 21:25:17.094904  527485 command_runner.go:124] > [  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	I0526 21:25:17.094921  527485 command_runner.go:124] > [  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	I0526 21:25:17.094935  527485 command_runner.go:124] > [May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	I0526 21:25:17.094948  527485 command_runner.go:124] > [  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	I0526 21:25:17.094961  527485 command_runner.go:124] > [  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	I0526 21:25:17.094975  527485 command_runner.go:124] > [ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	I0526 21:25:17.094986  527485 command_runner.go:124] > [ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	I0526 21:25:17.094994  527485 command_runner.go:124] > [May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	I0526 21:25:17.095006  527485 command_runner.go:124] > [ +45.152218] NFSD: Unable to end grace period: -110
	I0526 21:25:17.096269  527485 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:25:17.096283  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0526 21:25:17.234063  527485 command_runner.go:124] > Name:               multinode-20210526212238-510955
	I0526 21:25:17.234090  527485 command_runner.go:124] > Roles:              control-plane,master
	I0526 21:25:17.234100  527485 command_runner.go:124] > Labels:             beta.kubernetes.io/arch=amd64
	I0526 21:25:17.234108  527485 command_runner.go:124] >                     beta.kubernetes.io/os=linux
	I0526 21:25:17.234116  527485 command_runner.go:124] >                     kubernetes.io/arch=amd64
	I0526 21:25:17.234126  527485 command_runner.go:124] >                     kubernetes.io/hostname=multinode-20210526212238-510955
	I0526 21:25:17.234139  527485 command_runner.go:124] >                     kubernetes.io/os=linux
	I0526 21:25:17.234146  527485 command_runner.go:124] >                     minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	I0526 21:25:17.234153  527485 command_runner.go:124] >                     minikube.k8s.io/name=multinode-20210526212238-510955
	I0526 21:25:17.234163  527485 command_runner.go:124] >                     minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	I0526 21:25:17.234169  527485 command_runner.go:124] >                     minikube.k8s.io/version=v1.20.0
	I0526 21:25:17.234176  527485 command_runner.go:124] >                     node-role.kubernetes.io/control-plane=
	I0526 21:25:17.234182  527485 command_runner.go:124] >                     node-role.kubernetes.io/master=
	I0526 21:25:17.234190  527485 command_runner.go:124] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	I0526 21:25:17.234196  527485 command_runner.go:124] >                     node.alpha.kubernetes.io/ttl: 0
	I0526 21:25:17.234203  527485 command_runner.go:124] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0526 21:25:17.234210  527485 command_runner.go:124] > CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	I0526 21:25:17.234216  527485 command_runner.go:124] > Taints:             <none>
	I0526 21:25:17.234223  527485 command_runner.go:124] > Unschedulable:      false
	I0526 21:25:17.234226  527485 command_runner.go:124] > Lease:
	I0526 21:25:17.234231  527485 command_runner.go:124] >   HolderIdentity:  multinode-20210526212238-510955
	I0526 21:25:17.234237  527485 command_runner.go:124] >   AcquireTime:     <unset>
	I0526 21:25:17.234243  527485 command_runner.go:124] >   RenewTime:       Wed, 26 May 2021 21:25:14 +0000
	I0526 21:25:17.234250  527485 command_runner.go:124] > Conditions:
	I0526 21:25:17.234260  527485 command_runner.go:124] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0526 21:25:17.234272  527485 command_runner.go:124] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0526 21:25:17.234286  527485 command_runner.go:124] >   MemoryPressure   False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0526 21:25:17.234308  527485 command_runner.go:124] >   DiskPressure     False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0526 21:25:17.234330  527485 command_runner.go:124] >   PIDPressure      False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0526 21:25:17.234359  527485 command_runner.go:124] >   Ready            True    Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	I0526 21:25:17.234370  527485 command_runner.go:124] > Addresses:
	I0526 21:25:17.234374  527485 command_runner.go:124] >   InternalIP:  192.168.39.229
	I0526 21:25:17.234379  527485 command_runner.go:124] >   Hostname:    multinode-20210526212238-510955
	I0526 21:25:17.234382  527485 command_runner.go:124] > Capacity:
	I0526 21:25:17.234387  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:17.234392  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:17.234398  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:17.234403  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:17.234411  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:17.234418  527485 command_runner.go:124] > Allocatable:
	I0526 21:25:17.234424  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:17.234432  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:17.234439  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:17.234448  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:17.234453  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:17.234459  527485 command_runner.go:124] > System Info:
	I0526 21:25:17.234464  527485 command_runner.go:124] >   Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	I0526 21:25:17.234470  527485 command_runner.go:124] >   System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	I0526 21:25:17.234477  527485 command_runner.go:124] >   Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	I0526 21:25:17.234482  527485 command_runner.go:124] >   Kernel Version:             4.19.182
	I0526 21:25:17.234488  527485 command_runner.go:124] >   OS Image:                   Buildroot 2020.02.12
	I0526 21:25:17.234494  527485 command_runner.go:124] >   Operating System:           linux
	I0526 21:25:17.234502  527485 command_runner.go:124] >   Architecture:               amd64
	I0526 21:25:17.234512  527485 command_runner.go:124] >   Container Runtime Version:  containerd://1.4.4
	I0526 21:25:17.234520  527485 command_runner.go:124] >   Kubelet Version:            v1.20.2
	I0526 21:25:17.234531  527485 command_runner.go:124] >   Kube-Proxy Version:         v1.20.2
	I0526 21:25:17.234539  527485 command_runner.go:124] > PodCIDR:                      10.244.0.0/24
	I0526 21:25:17.234549  527485 command_runner.go:124] > PodCIDRs:                     10.244.0.0/24
	I0526 21:25:17.234558  527485 command_runner.go:124] > Non-terminated Pods:          (8 in total)
	I0526 21:25:17.234572  527485 command_runner.go:124] >   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	I0526 21:25:17.234583  527485 command_runner.go:124] >   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	I0526 21:25:17.234601  527485 command_runner.go:124] >   kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     84s
	I0526 21:25:17.234619  527485 command_runner.go:124] >   kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         93s
	I0526 21:25:17.234636  527485 command_runner.go:124] >   kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      84s
	I0526 21:25:17.234654  527485 command_runner.go:124] >   kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	I0526 21:25:17.234690  527485 command_runner.go:124] >   kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	I0526 21:25:17.234708  527485 command_runner.go:124] >   kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	I0526 21:25:17.234726  527485 command_runner.go:124] >   kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	I0526 21:25:17.234747  527485 command_runner.go:124] >   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	I0526 21:25:17.234757  527485 command_runner.go:124] > Allocated resources:
	I0526 21:25:17.234769  527485 command_runner.go:124] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0526 21:25:17.234780  527485 command_runner.go:124] >   Resource           Requests     Limits
	I0526 21:25:17.234788  527485 command_runner.go:124] >   --------           --------     ------
	I0526 21:25:17.234796  527485 command_runner.go:124] >   cpu                850m (42%)   100m (5%)
	I0526 21:25:17.234807  527485 command_runner.go:124] >   memory             220Mi (10%)  220Mi (10%)
	I0526 21:25:17.234815  527485 command_runner.go:124] >   ephemeral-storage  100Mi (0%)   0 (0%)
	I0526 21:25:17.234825  527485 command_runner.go:124] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0526 21:25:17.234830  527485 command_runner.go:124] > Events:
	I0526 21:25:17.234837  527485 command_runner.go:124] >   Type    Reason                   Age                  From        Message
	I0526 21:25:17.234845  527485 command_runner.go:124] >   ----    ------                   ----                 ----        -------
	I0526 21:25:17.234852  527485 command_runner.go:124] >   Normal  Starting                 110s                 kubelet     Starting kubelet.
	I0526 21:25:17.234862  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  109s (x4 over 110s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:17.234873  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    109s (x3 over 110s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:17.234884  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     109s (x3 over 110s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:17.234894  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  109s                 kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:17.234901  527485 command_runner.go:124] >   Normal  Starting                 94s                  kubelet     Starting kubelet.
	I0526 21:25:17.234911  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  93s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:17.234923  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    93s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:17.234940  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     93s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:17.234956  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  93s                  kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:17.234968  527485 command_runner.go:124] >   Normal  Starting                 83s                  kube-proxy  Starting kube-proxy.
	I0526 21:25:17.234984  527485 command_runner.go:124] >   Normal  NodeReady                73s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
	I0526 21:25:17.238081  527485 logs.go:123] Gathering logs for etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] ...
	I0526 21:25:17.238102  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:17.260189  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:17.260243  527485 command_runner.go:124] ! 2021-05-26 21:23:30.145280 I | etcdmain: etcd Version: 3.4.13
	I0526 21:25:17.260535  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146007 I | etcdmain: Git SHA: ae9734ed2
	I0526 21:25:17.260584  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146359 I | etcdmain: Go Version: go1.12.17
	I0526 21:25:17.260944  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146935 I | etcdmain: Go OS/Arch: linux/amd64
	I0526 21:25:17.261134  527485 command_runner.go:124] ! 2021-05-26 21:23:30.147549 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
	I0526 21:25:17.261201  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:17.261504  527485 command_runner.go:124] ! 2021-05-26 21:23:30.148927 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:17.261560  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159191 I | embed: name = multinode-20210526212238-510955
	I0526 21:25:17.261864  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159781 I | embed: data dir = /var/lib/minikube/etcd
	I0526 21:25:17.261951  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161368 I | embed: member dir = /var/lib/minikube/etcd/member
	I0526 21:25:17.262261  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161781 I | embed: heartbeat = 100ms
	I0526 21:25:17.262280  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162024 I | embed: election = 1000ms
	I0526 21:25:17.262419  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162419 I | embed: snapshot count = 10000
	I0526 21:25:17.262485  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162834 I | embed: advertise client URLs = https://192.168.39.229:2379
	I0526 21:25:17.262700  527485 command_runner.go:124] ! 2021-05-26 21:23:30.186657 I | etcdserver: starting member b8647f2870156d71 in cluster 2bfbf13ce68722b
	I0526 21:25:17.262979  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=()
	I0526 21:25:17.263063  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 0
	I0526 21:25:17.263227  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: newRaft b8647f2870156d71 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	I0526 21:25:17.263377  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 1
	I0526 21:25:17.263437  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:17.263679  527485 command_runner.go:124] ! 2021-05-26 21:23:30.205555 W | auth: simple token is not cryptographically signed
	I0526 21:25:17.263960  527485 command_runner.go:124] ! 2021-05-26 21:23:30.234208 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	I0526 21:25:17.264088  527485 command_runner.go:124] ! 2021-05-26 21:23:30.243414 I | etcdserver: b8647f2870156d71 as single-node; fast-forwarding 9 ticks (election ticks 10)
	I0526 21:25:17.264181  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:17.264437  527485 command_runner.go:124] ! 2021-05-26 21:23:30.255082 I | etcdserver/membership: added member b8647f2870156d71 [https://192.168.39.229:2380] to cluster 2bfbf13ce68722b
	I0526 21:25:17.264505  527485 command_runner.go:124] ! 2021-05-26 21:23:30.261097 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:17.264722  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264526 I | embed: listening for peers on 192.168.39.229:2380
	I0526 21:25:17.264938  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264701 I | embed: listening for metrics on http://127.0.0.1:2381
	I0526 21:25:17.265081  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 is starting a new election at term 1
	I0526 21:25:17.265422  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became candidate at term 2
	I0526 21:25:17.265514  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 2
	I0526 21:25:17.265592  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became leader at term 2
	I0526 21:25:17.265838  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 2
	I0526 21:25:17.265897  527485 command_runner.go:124] ! 2021-05-26 21:23:30.893688 I | etcdserver: setting up the initial cluster version to 3.4
	I0526 21:25:17.266124  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897562 I | embed: ready to serve client requests
	I0526 21:25:17.266399  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897893 I | etcdserver: published {Name:multinode-20210526212238-510955 ClientURLs:[https://192.168.39.229:2379]} to cluster 2bfbf13ce68722b
	I0526 21:25:17.266417  527485 command_runner.go:124] ! 2021-05-26 21:23:30.898097 I | embed: ready to serve client requests
	I0526 21:25:17.266428  527485 command_runner.go:124] ! 2021-05-26 21:23:30.904911 I | embed: serving client requests on 127.0.0.1:2379
	I0526 21:25:17.266444  527485 command_runner.go:124] ! 2021-05-26 21:23:30.925406 I | embed: serving client requests on 192.168.39.229:2379
	I0526 21:25:17.266454  527485 command_runner.go:124] ! 2021-05-26 21:23:30.930764 N | etcdserver/membership: set the initial cluster version to 3.4
	I0526 21:25:17.266464  527485 command_runner.go:124] ! 2021-05-26 21:23:30.973015 I | etcdserver/api: enabled capabilities for version 3.4
	I0526 21:25:17.266476  527485 command_runner.go:124] ! 2021-05-26 21:23:35.005110 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (158.136927ms) to execute
	I0526 21:25:17.266495  527485 command_runner.go:124] ! 2021-05-26 21:23:35.008540 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (159.3133ms) to execute
	I0526 21:25:17.266508  527485 command_runner.go:124] ! 2021-05-26 21:23:35.012635 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.936302ms) to execute
	I0526 21:25:17.266524  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013064 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (148.811077ms) to execute
	I0526 21:25:17.266537  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013577 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (157.477156ms) to execute
	I0526 21:25:17.266546  527485 command_runner.go:124] ! 2021-05-26 21:23:48.034379 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266553  527485 command_runner.go:124] ! 2021-05-26 21:23:50.916831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266561  527485 command_runner.go:124] ! 2021-05-26 21:24:00.917857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266568  527485 command_runner.go:124] ! 2021-05-26 21:24:10.918220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266576  527485 command_runner.go:124] ! 2021-05-26 21:24:20.917896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266583  527485 command_runner.go:124] ! 2021-05-26 21:24:30.916918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266590  527485 command_runner.go:124] ! 2021-05-26 21:24:40.917190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266598  527485 command_runner.go:124] ! 2021-05-26 21:24:50.917237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266605  527485 command_runner.go:124] ! 2021-05-26 21:25:00.916673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266613  527485 command_runner.go:124] ! 2021-05-26 21:25:10.921256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
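The etcd log above ends with the periodic "/health OK (status code 200)" probes against the metrics listener it announced on http://127.0.0.1:2381. A minimal sketch of the same check, assuming it runs on the node itself where that loopback listener is reachable without client certificates (the 2379 client endpoint in this setup requires TLS):

	// etcdhealth.go - poll etcd's /health endpoint on the metrics listener,
	// mirroring the recurring health probes visible in the log above.
	package main
	
	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		for i := 0; i < 3; i++ {
			resp, err := client.Get("http://127.0.0.1:2381/health")
			if err != nil {
				log.Printf("health check failed: %v", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s %s\n", resp.Status, body)
			}
			time.Sleep(10 * time.Second) // the log shows roughly 10s between probes
		}
	}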
	I0526 21:25:17.270461  527485 logs.go:123] Gathering logs for containerd ...
	I0526 21:25:17.270487  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:25:17.305712  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:17 UTC. --
	I0526 21:25:17.305741  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:17.305752  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:17.305779  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.412639957Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:17.305804  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.454795725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:17.305820  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.455022736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305844  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456819758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.305861  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456940685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305881  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457199432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.305898  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457299817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305915  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457342626Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:17.305930  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457353348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305946  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457375564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305962  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457518971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305984  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457752665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.305999  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457768067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:17.306015  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457801760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:17.306029  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457811694Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:17.306048  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461742670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:17.306068  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461851430Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:17.306083  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462036878Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306099  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462069131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306114  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462082171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306130  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462094524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306145  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462115116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306160  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462127721Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306176  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462139766Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306195  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462157542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306213  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462167923Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:17.306228  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462295610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:17.306244  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462357720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:17.306260  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462745295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306276  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462770123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:17.306291  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462815565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306307  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462827921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306323  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462846347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306338  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462857513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306352  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462870788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306369  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462881154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306386  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462892049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306402  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462903002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306417  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462913917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:17.306432  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462958764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306447  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462972025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306461  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462983386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306475  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462994704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306493  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463133131Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `systemd_cgroup` only works for runtime io.containerd.runtime.v1.linux"
	I0526 21:25:17.306509  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463145276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306523  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463363744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:17.306537  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463401676Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:17.306550  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463415404Z" level=info msg="containerd successfully booted in 0.052163s"
	I0526 21:25:17.306560  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Stopping containerd container runtime...
	I0526 21:25:17.306572  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: containerd.service: Succeeded.
	I0526 21:25:17.306584  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Stopped containerd container runtime.
	I0526 21:25:17.306594  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:17.306604  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:17.306617  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.677351233Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:17.306632  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703735354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:17.306648  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703939180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306671  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706070962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.306689  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706222939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306712  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706683734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.306728  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706837938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306743  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706963959Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:17.306763  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707081760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306778  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707216688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306796  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707381113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306821  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707841019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.306836  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707973506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:17.306853  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708095816Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:17.306868  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708236930Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:17.306884  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708536776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:17.306898  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708698510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:17.306916  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708937323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306932  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709074999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306948  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709196994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306963  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709315424Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306979  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709506686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306996  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709629192Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307025  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709743913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307041  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709857985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307056  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709979410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:17.307072  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710125076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:17.307087  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710271949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:17.307103  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710830775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307119  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710974791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:17.307135  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711117145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307150  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711243334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307165  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711363735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307179  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711549081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307194  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711666234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307209  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711781506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307223  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711895813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307248  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712013139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307263  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712131897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:17.307278  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712269473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307293  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712503525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307308  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712659007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307324  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712779064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307342  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712986218Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	I0526 21:25:17.307439  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713141331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00037b050 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.2 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	I0526 21:25:17.307456  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713322225Z" level=info msg="Connect containerd service"
	I0526 21:25:17.307470  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713538361Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	I0526 21:25:17.307491  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714213931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:17.307507  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714359921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307523  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714868242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:17.307537  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715023618Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:17.307550  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715142631Z" level=info msg="containerd successfully booted in 0.038760s"
	I0526 21:25:17.307563  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726087774Z" level=info msg="Start subscribing containerd event"
	I0526 21:25:17.307572  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726733995Z" level=info msg="Start recovering state"
	I0526 21:25:17.307586  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781395051Z" level=info msg="Start event monitor"
	I0526 21:25:17.307599  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781771001Z" level=info msg="Start snapshots syncer"
	I0526 21:25:17.307612  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781893491Z" level=info msg="Start cni network conf syncer"
	I0526 21:25:17.307624  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.782003464Z" level=info msg="Start streaming server"
	I0526 21:25:17.307641  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.484581294Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307659  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.490843770Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307679  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.501056680Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307697  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.508591647Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307716  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.580716340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486 pid=2407
	I0526 21:25:17.307738  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.598809833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb pid=2435
	I0526 21:25:17.307762  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602060491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5 pid=2434
	I0526 21:25:17.307782  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602007310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e pid=2452
	I0526 21:25:17.307804  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.066808539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\""
	I0526 21:25:17.307824  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.074803022Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	I0526 21:25:17.307846  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.194718464Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:17.307861  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.196219933Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:17.307885  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.262678371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\""
	I0526 21:25:17.307905  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.272571919Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	I0526 21:25:17.307927  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.347228547Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:17.307943  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.349365690Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:17.307960  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.419043703Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\" returns successfully"
	I0526 21:25:17.307982  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.520520792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\""
	I0526 21:25:17.308004  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.527415671Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	I0526 21:25:17.308026  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.566421321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\""
	I0526 21:25:17.308046  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.575850717Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	I0526 21:25:17.308070  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.621335319Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:17.308086  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.623169879Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:17.308105  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.681255114Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\" returns successfully"
	I0526 21:25:17.308127  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.683704929Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:17.308143  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.684577023Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:17.308160  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.017920282Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\" returns successfully"
	I0526 21:25:17.308177  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.056525418Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\" returns successfully"
	I0526 21:25:17.308215  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.290788536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	I0526 21:25:17.308233  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.802102062Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308256  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.839975209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8 pid=2987
	I0526 21:25:17.308274  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.915628984Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308294  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.950847165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a pid=3013
	I0526 21:25:17.308316  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.116312794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\""
	I0526 21:25:17.308342  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.127305490Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	I0526 21:25:17.308364  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.182202148Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:17.308380  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.188910123Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:17.308397  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.381612238Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\" returns successfully"
	I0526 21:25:17.308417  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.674364903Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	I0526 21:25:17.308437  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683119285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:17.308459  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683711665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:17.308476  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582858367Z" level=error msg="get state for 53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8" error="context deadline exceeded: unknown"
	I0526 21:25:17.308489  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582967226Z" level=warning msg="unknown status" status=0
	I0526 21:25:17.308510  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.969753374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\""
	I0526 21:25:17.308531  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.975070195Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	I0526 21:25:17.308553  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.027887855Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:17.308571  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.029566085Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:17.308587  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.574608517Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\" returns successfully"
	I0526 21:25:17.308605  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.297649575Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308623  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.323344186Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308641  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.332120092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55 pid=3313
	I0526 21:25:17.308660  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.442356819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900 pid=3376
	I0526 21:25:17.308681  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.792546853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\""
	I0526 21:25:17.308702  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.796339883Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	I0526 21:25:17.308724  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.843281999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\""
	I0526 21:25:17.308744  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.849108598Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	I0526 21:25:17.308770  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.875948742Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:17.308786  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.879073015Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:17.308807  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.915826719Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:17.308823  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.918179651Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:17.308839  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.083539707Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\" returns successfully"
	I0526 21:25:17.308855  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.120722012Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\" returns successfully"
	I0526 21:25:17.325604  527485 logs.go:123] Gathering logs for container status ...
	I0526 21:25:17.325623  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:25:17.345401  527485 command_runner.go:124] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	I0526 21:25:17.345429  527485 command_runner.go:124] > a9593dff4428d       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   1d96eb581f035
	I0526 21:25:17.345440  527485 command_runner.go:124] > 5d3df8c94eaed       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   722b1b257c571
	I0526 21:25:17.345459  527485 command_runner.go:124] > 69df1859ce4d1       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   53490c652b9e5
	I0526 21:25:17.345473  527485 command_runner.go:124] > de6efc6fec4b2       43154ddb57a83       About a minute ago   Running             kube-proxy                0                   038c42970362d
	I0526 21:25:17.345487  527485 command_runner.go:124] > c8538106e966b       0369cf4303ffd       About a minute ago   Running             etcd                      0                   2ad404c6a9c44
	I0526 21:25:17.345506  527485 command_runner.go:124] > e6bb9bee7539a       ed2c44fbdd78b       About a minute ago   Running             kube-scheduler            0                   24fd8b8599a6e
	I0526 21:25:17.345524  527485 command_runner.go:124] > 2314e41b1b443       a27166429d98e       About a minute ago   Running             kube-controller-manager   0                   73ada73fbbf0b
	I0526 21:25:17.345537  527485 command_runner.go:124] > a0581c0e5409b       a8c2fdb8bf76e       About a minute ago   Running             kube-apiserver            0                   fe43674906f20
	I0526 21:25:17.346897  527485 logs.go:123] Gathering logs for kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] ...
	I0526 21:25:17.346916  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:17.366204  527485 command_runner.go:124] ! Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	I0526 21:25:17.366224  527485 command_runner.go:124] ! I0526 21:23:29.805604       1 server.go:632] external host was not specified, using 192.168.39.229
	I0526 21:25:17.366231  527485 command_runner.go:124] ! I0526 21:23:29.806982       1 server.go:182] Version: v1.20.2
	I0526 21:25:17.366239  527485 command_runner.go:124] ! I0526 21:23:30.593640       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0526 21:25:17.366258  527485 command_runner.go:124] ! I0526 21:23:30.598821       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:17.366279  527485 command_runner.go:124] ! I0526 21:23:30.598945       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:17.366315  527485 command_runner.go:124] ! I0526 21:23:30.600954       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:17.366349  527485 command_runner.go:124] ! I0526 21:23:30.601309       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:17.366363  527485 command_runner.go:124] ! I0526 21:23:30.616590       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366374  527485 command_runner.go:124] ! I0526 21:23:30.617065       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366384  527485 command_runner.go:124] ! I0526 21:23:30.995013       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366409  527485 command_runner.go:124] ! I0526 21:23:30.995139       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366422  527485 command_runner.go:124] ! I0526 21:23:31.030659       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:17.366436  527485 command_runner.go:124] ! I0526 21:23:31.031231       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.366448  527485 command_runner.go:124] ! I0526 21:23:31.031324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.366458  527485 command_runner.go:124] ! I0526 21:23:31.032369       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366470  527485 command_runner.go:124] ! I0526 21:23:31.032725       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366476  527485 command_runner.go:124] ! I0526 21:23:31.143094       1 instance.go:289] Using reconciler: lease
	I0526 21:25:17.366484  527485 command_runner.go:124] ! I0526 21:23:31.148814       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366493  527485 command_runner.go:124] ! I0526 21:23:31.148936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366501  527485 command_runner.go:124] ! I0526 21:23:31.164327       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366510  527485 command_runner.go:124] ! I0526 21:23:31.164627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366520  527485 command_runner.go:124] ! I0526 21:23:31.183831       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366533  527485 command_runner.go:124] ! I0526 21:23:31.184185       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366544  527485 command_runner.go:124] ! I0526 21:23:31.203621       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366555  527485 command_runner.go:124] ! I0526 21:23:31.204140       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366563  527485 command_runner.go:124] ! I0526 21:23:31.218608       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366571  527485 command_runner.go:124] ! I0526 21:23:31.218929       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366580  527485 command_runner.go:124] ! I0526 21:23:31.235670       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366589  527485 command_runner.go:124] ! I0526 21:23:31.235780       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366597  527485 command_runner.go:124] ! I0526 21:23:31.248767       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366623  527485 command_runner.go:124] ! I0526 21:23:31.248973       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366660  527485 command_runner.go:124] ! I0526 21:23:31.270717       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366670  527485 command_runner.go:124] ! I0526 21:23:31.272045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366680  527485 command_runner.go:124] ! I0526 21:23:31.287807       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366691  527485 command_runner.go:124] ! I0526 21:23:31.288158       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366698  527485 command_runner.go:124] ! I0526 21:23:31.302175       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366707  527485 command_runner.go:124] ! I0526 21:23:31.302294       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366716  527485 command_runner.go:124] ! I0526 21:23:31.318788       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366725  527485 command_runner.go:124] ! I0526 21:23:31.318898       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366733  527485 command_runner.go:124] ! I0526 21:23:31.340681       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366742  527485 command_runner.go:124] ! I0526 21:23:31.341103       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366751  527485 command_runner.go:124] ! I0526 21:23:31.364875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366760  527485 command_runner.go:124] ! I0526 21:23:31.365260       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366768  527485 command_runner.go:124] ! I0526 21:23:31.375229       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366777  527485 command_runner.go:124] ! I0526 21:23:31.375353       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366786  527485 command_runner.go:124] ! I0526 21:23:31.384385       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366795  527485 command_runner.go:124] ! I0526 21:23:31.384585       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366806  527485 command_runner.go:124] ! I0526 21:23:31.392770       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366818  527485 command_runner.go:124] ! I0526 21:23:31.392939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366828  527485 command_runner.go:124] ! I0526 21:23:31.406398       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366841  527485 command_runner.go:124] ! I0526 21:23:31.406589       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366849  527485 command_runner.go:124] ! I0526 21:23:31.421828       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366858  527485 command_runner.go:124] ! I0526 21:23:31.422392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366868  527485 command_runner.go:124] ! I0526 21:23:31.434772       1 rest.go:131] the default service ipfamily for this cluster is: IPv4
	I0526 21:25:17.366875  527485 command_runner.go:124] ! I0526 21:23:31.530123       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366885  527485 command_runner.go:124] ! I0526 21:23:31.530234       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366893  527485 command_runner.go:124] ! I0526 21:23:31.542917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366902  527485 command_runner.go:124] ! I0526 21:23:31.543258       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366911  527485 command_runner.go:124] ! I0526 21:23:31.558871       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366921  527485 command_runner.go:124] ! I0526 21:23:31.558975       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366930  527485 command_runner.go:124] ! I0526 21:23:31.578311       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366939  527485 command_runner.go:124] ! I0526 21:23:31.578428       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366948  527485 command_runner.go:124] ! I0526 21:23:31.579212       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366957  527485 command_runner.go:124] ! I0526 21:23:31.579406       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366966  527485 command_runner.go:124] ! I0526 21:23:31.593279       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366975  527485 command_runner.go:124] ! I0526 21:23:31.593392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366983  527485 command_runner.go:124] ! I0526 21:23:31.609260       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366993  527485 command_runner.go:124] ! I0526 21:23:31.609368       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367001  527485 command_runner.go:124] ! I0526 21:23:31.626851       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367010  527485 command_runner.go:124] ! I0526 21:23:31.626960       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367033  527485 command_runner.go:124] ! I0526 21:23:31.653023       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367056  527485 command_runner.go:124] ! I0526 21:23:31.653138       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367066  527485 command_runner.go:124] ! I0526 21:23:31.662951       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367076  527485 command_runner.go:124] ! I0526 21:23:31.663349       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367085  527485 command_runner.go:124] ! I0526 21:23:31.683106       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367094  527485 command_runner.go:124] ! I0526 21:23:31.684613       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367104  527485 command_runner.go:124] ! I0526 21:23:31.700741       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367115  527485 command_runner.go:124] ! I0526 21:23:31.701266       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367124  527485 command_runner.go:124] ! I0526 21:23:31.722045       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367133  527485 command_runner.go:124] ! I0526 21:23:31.722235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367141  527485 command_runner.go:124] ! I0526 21:23:31.736295       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367150  527485 command_runner.go:124] ! I0526 21:23:31.737071       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367157  527485 command_runner.go:124] ! I0526 21:23:31.751086       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367166  527485 command_runner.go:124] ! I0526 21:23:31.751202       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367175  527485 command_runner.go:124] ! I0526 21:23:31.767941       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367184  527485 command_runner.go:124] ! I0526 21:23:31.768045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367192  527485 command_runner.go:124] ! I0526 21:23:31.784917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367201  527485 command_runner.go:124] ! I0526 21:23:31.785029       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367209  527485 command_runner.go:124] ! I0526 21:23:31.802204       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367218  527485 command_runner.go:124] ! I0526 21:23:31.802314       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367227  527485 command_runner.go:124] ! I0526 21:23:31.817427       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367241  527485 command_runner.go:124] ! I0526 21:23:31.817616       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367252  527485 command_runner.go:124] ! I0526 21:23:31.837841       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367265  527485 command_runner.go:124] ! I0526 21:23:31.837939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367277  527485 command_runner.go:124] ! I0526 21:23:31.860217       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367293  527485 command_runner.go:124] ! I0526 21:23:31.861221       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367307  527485 command_runner.go:124] ! I0526 21:23:31.871254       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367324  527485 command_runner.go:124] ! I0526 21:23:31.872836       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367336  527485 command_runner.go:124] ! I0526 21:23:31.884052       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367347  527485 command_runner.go:124] ! I0526 21:23:31.884160       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367353  527485 command_runner.go:124] ! I0526 21:23:31.898818       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367368  527485 command_runner.go:124] ! I0526 21:23:31.898925       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367383  527485 command_runner.go:124] ! I0526 21:23:31.913046       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367398  527485 command_runner.go:124] ! I0526 21:23:31.913149       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367411  527485 command_runner.go:124] ! I0526 21:23:31.925884       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367424  527485 command_runner.go:124] ! I0526 21:23:31.925994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367434  527485 command_runner.go:124] ! I0526 21:23:31.939143       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367443  527485 command_runner.go:124] ! I0526 21:23:31.939253       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367455  527485 command_runner.go:124] ! I0526 21:23:31.954393       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367494  527485 command_runner.go:124] ! I0526 21:23:31.956005       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367508  527485 command_runner.go:124] ! I0526 21:23:31.964255       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367522  527485 command_runner.go:124] ! I0526 21:23:31.964369       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367529  527485 command_runner.go:124] ! I0526 21:23:31.980824       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367539  527485 command_runner.go:124] ! I0526 21:23:31.980931       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367548  527485 command_runner.go:124] ! I0526 21:23:31.998875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367558  527485 command_runner.go:124] ! I0526 21:23:31.998978       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367566  527485 command_runner.go:124] ! I0526 21:23:32.014057       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367575  527485 command_runner.go:124] ! I0526 21:23:32.014169       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367584  527485 command_runner.go:124] ! I0526 21:23:32.027301       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367595  527485 command_runner.go:124] ! I0526 21:23:32.027633       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367603  527485 command_runner.go:124] ! I0526 21:23:32.046160       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367613  527485 command_runner.go:124] ! I0526 21:23:32.046890       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367622  527485 command_runner.go:124] ! I0526 21:23:32.068538       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367631  527485 command_runner.go:124] ! I0526 21:23:32.069814       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367639  527485 command_runner.go:124] ! I0526 21:23:32.087119       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367648  527485 command_runner.go:124] ! I0526 21:23:32.087547       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367657  527485 command_runner.go:124] ! I0526 21:23:32.097832       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367668  527485 command_runner.go:124] ! I0526 21:23:32.097940       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367676  527485 command_runner.go:124] ! I0526 21:23:32.107249       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367685  527485 command_runner.go:124] ! I0526 21:23:32.107932       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367694  527485 command_runner.go:124] ! I0526 21:23:32.119796       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367703  527485 command_runner.go:124] ! I0526 21:23:32.119897       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367711  527485 command_runner.go:124] ! I0526 21:23:32.128209       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367720  527485 command_runner.go:124] ! I0526 21:23:32.128321       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367728  527485 command_runner.go:124] ! I0526 21:23:32.138008       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367737  527485 command_runner.go:124] ! I0526 21:23:32.138111       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367746  527485 command_runner.go:124] ! I0526 21:23:32.160727       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367755  527485 command_runner.go:124] ! I0526 21:23:32.160833       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367764  527485 command_runner.go:124] ! I0526 21:23:32.186843       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367773  527485 command_runner.go:124] ! I0526 21:23:32.186949       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367781  527485 command_runner.go:124] ! I0526 21:23:32.198121       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367791  527485 command_runner.go:124] ! I0526 21:23:32.198232       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367799  527485 command_runner.go:124] ! I0526 21:23:32.206015       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367814  527485 command_runner.go:124] ! I0526 21:23:32.206127       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367824  527485 command_runner.go:124] ! I0526 21:23:32.222761       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367834  527485 command_runner.go:124] ! I0526 21:23:32.223204       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367842  527485 command_runner.go:124] ! I0526 21:23:32.232528       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367852  527485 command_runner.go:124] ! I0526 21:23:32.232629       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367860  527485 command_runner.go:124] ! I0526 21:23:32.245897       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367885  527485 command_runner.go:124] ! I0526 21:23:32.246007       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367897  527485 command_runner.go:124] ! I0526 21:23:32.263847       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367919  527485 command_runner.go:124] ! I0526 21:23:32.263950       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367927  527485 command_runner.go:124] ! I0526 21:23:32.275996       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367936  527485 command_runner.go:124] ! I0526 21:23:32.276100       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367945  527485 command_runner.go:124] ! I0526 21:23:32.286992       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367954  527485 command_runner.go:124] ! I0526 21:23:32.288760       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367962  527485 command_runner.go:124] ! I0526 21:23:32.300558       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367971  527485 command_runner.go:124] ! I0526 21:23:32.300656       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367982  527485 command_runner.go:124] ! W0526 21:23:32.466350       1 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
	I0526 21:25:17.367993  527485 command_runner.go:124] ! W0526 21:23:32.475974       1 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368001  527485 command_runner.go:124] ! W0526 21:23:32.486620       1 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368012  527485 command_runner.go:124] ! W0526 21:23:32.495038       1 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368023  527485 command_runner.go:124] ! W0526 21:23:32.498634       1 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368034  527485 command_runner.go:124] ! W0526 21:23:32.503834       1 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368044  527485 command_runner.go:124] ! W0526 21:23:32.506839       1 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368054  527485 command_runner.go:124] ! W0526 21:23:32.511920       1 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
	I0526 21:25:17.368064  527485 command_runner.go:124] ! W0526 21:23:32.512155       1 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
	I0526 21:25:17.368083  527485 command_runner.go:124] ! I0526 21:23:32.520325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:17.368106  527485 command_runner.go:124] ! I0526 21:23:32.520699       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:17.368116  527485 command_runner.go:124] ! I0526 21:23:32.522294       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.368127  527485 command_runner.go:124] ! I0526 21:23:32.522675       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.368136  527485 command_runner.go:124] ! I0526 21:23:32.531035       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.368145  527485 command_runner.go:124] ! I0526 21:23:32.531144       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.368157  527485 command_runner.go:124] ! I0526 21:23:34.690784       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:17.368166  527485 command_runner.go:124] ! I0526 21:23:34.691285       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.368178  527485 command_runner.go:124] ! I0526 21:23:34.692130       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
	I0526 21:25:17.368188  527485 command_runner.go:124] ! I0526 21:23:34.692740       1 secure_serving.go:197] Serving securely on [::]:8443
	I0526 21:25:17.368196  527485 command_runner.go:124] ! I0526 21:23:34.693343       1 apf_controller.go:261] Starting API Priority and Fairness config controller
	I0526 21:25:17.368204  527485 command_runner.go:124] ! I0526 21:23:34.693677       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:17.368215  527485 command_runner.go:124] ! I0526 21:23:34.694744       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0526 21:25:17.368224  527485 command_runner.go:124] ! I0526 21:23:34.694836       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0526 21:25:17.368247  527485 command_runner.go:124] ! I0526 21:23:34.694880       1 available_controller.go:475] Starting AvailableConditionController
	I0526 21:25:17.368258  527485 command_runner.go:124] ! I0526 21:23:34.694885       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0526 21:25:17.368265  527485 command_runner.go:124] ! I0526 21:23:34.694904       1 autoregister_controller.go:141] Starting autoregister controller
	I0526 21:25:17.368274  527485 command_runner.go:124] ! I0526 21:23:34.694908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0526 21:25:17.368281  527485 command_runner.go:124] ! I0526 21:23:34.696887       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0526 21:25:17.368292  527485 command_runner.go:124] ! I0526 21:23:34.697053       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0526 21:25:17.368305  527485 command_runner.go:124] ! I0526 21:23:34.697670       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	I0526 21:25:17.368314  527485 command_runner.go:124] ! I0526 21:23:34.697935       1 controller.go:83] Starting OpenAPI AggregationController
	I0526 21:25:17.368323  527485 command_runner.go:124] ! I0526 21:23:34.698627       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0526 21:25:17.368334  527485 command_runner.go:124] ! I0526 21:23:34.705120       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.368344  527485 command_runner.go:124] ! I0526 21:23:34.705289       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:17.368354  527485 command_runner.go:124] ! I0526 21:23:34.706119       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0526 21:25:17.368362  527485 command_runner.go:124] ! I0526 21:23:34.706246       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0526 21:25:17.368376  527485 command_runner.go:124] ! E0526 21:23:34.733148       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.229, ResourceVersion: 0, AdditionalErrorMsg: 
	I0526 21:25:17.368387  527485 command_runner.go:124] ! I0526 21:23:34.762565       1 controller.go:86] Starting OpenAPI controller
	I0526 21:25:17.368398  527485 command_runner.go:124] ! I0526 21:23:34.762983       1 naming_controller.go:291] Starting NamingConditionController
	I0526 21:25:17.368413  527485 command_runner.go:124] ! I0526 21:23:34.763230       1 establishing_controller.go:76] Starting EstablishingController
	I0526 21:25:17.368428  527485 command_runner.go:124] ! I0526 21:23:34.763815       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0526 21:25:17.368441  527485 command_runner.go:124] ! I0526 21:23:34.764676       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0526 21:25:17.368451  527485 command_runner.go:124] ! I0526 21:23:34.765003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0526 21:25:17.368458  527485 command_runner.go:124] ! I0526 21:23:34.894833       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0526 21:25:17.368467  527485 command_runner.go:124] ! I0526 21:23:34.895159       1 cache.go:39] Caches are synced for autoregister controller
	I0526 21:25:17.368477  527485 command_runner.go:124] ! I0526 21:23:34.895543       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0526 21:25:17.368486  527485 command_runner.go:124] ! I0526 21:23:34.895893       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0526 21:25:17.368496  527485 command_runner.go:124] ! I0526 21:23:34.897085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0526 21:25:17.368504  527485 command_runner.go:124] ! I0526 21:23:34.899871       1 apf_controller.go:266] Running API Priority and Fairness config worker
	I0526 21:25:17.368513  527485 command_runner.go:124] ! I0526 21:23:34.907242       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0526 21:25:17.368524  527485 command_runner.go:124] ! I0526 21:23:35.022751       1 controller.go:609] quota admission added evaluator for: namespaces
	I0526 21:25:17.368541  527485 command_runner.go:124] ! I0526 21:23:35.690855       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0526 21:25:17.368560  527485 command_runner.go:124] ! I0526 21:23:35.691097       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0526 21:25:17.368580  527485 command_runner.go:124] ! I0526 21:23:35.708402       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0526 21:25:17.368598  527485 command_runner.go:124] ! I0526 21:23:35.726885       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0526 21:25:17.368613  527485 command_runner.go:124] ! I0526 21:23:35.727088       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0526 21:25:17.368625  527485 command_runner.go:124] ! I0526 21:23:36.334571       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0526 21:25:17.368637  527485 command_runner.go:124] ! I0526 21:23:36.389004       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0526 21:25:17.368646  527485 command_runner.go:124] ! W0526 21:23:36.485873       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.229]
	I0526 21:25:17.368658  527485 command_runner.go:124] ! I0526 21:23:36.487435       1 controller.go:609] quota admission added evaluator for: endpoints
	I0526 21:25:17.368668  527485 command_runner.go:124] ! I0526 21:23:36.499209       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0526 21:25:17.368678  527485 command_runner.go:124] ! I0526 21:23:37.294654       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0526 21:25:17.368686  527485 command_runner.go:124] ! I0526 21:23:38.382157       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0526 21:25:17.368695  527485 command_runner.go:124] ! I0526 21:23:38.454712       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0526 21:25:17.368705  527485 command_runner.go:124] ! I0526 21:23:43.955877       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0526 21:25:17.368713  527485 command_runner.go:124] ! I0526 21:23:53.285833       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0526 21:25:17.368723  527485 command_runner.go:124] ! I0526 21:23:53.338274       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0526 21:25:17.368729  527485 command_runner.go:124] ! I0526 21:24:01.973387       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:17.368740  527485 command_runner.go:124] ! I0526 21:24:01.973608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.368750  527485 command_runner.go:124] ! I0526 21:24:01.973627       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.368758  527485 command_runner.go:124] ! I0526 21:24:43.497572       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:17.368770  527485 command_runner.go:124] ! I0526 21:24:43.497775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.368779  527485 command_runner.go:124] ! I0526 21:24:43.498072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.379028  527485 logs.go:123] Gathering logs for coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] ...
	I0526 21:25:17.379043  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:17.398421  527485 command_runner.go:124] > .:53
	I0526 21:25:17.398439  527485 command_runner.go:124] > [INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	I0526 21:25:17.398444  527485 command_runner.go:124] > CoreDNS-1.7.0
	I0526 21:25:17.398451  527485 command_runner.go:124] > linux/amd64, go1.14.4, f59c03d
	I0526 21:25:17.398533  527485 logs.go:123] Gathering logs for kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] ...
	I0526 21:25:17.398545  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:17.416827  527485 command_runner.go:124] ! I0526 21:23:31.228401       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:17.416933  527485 command_runner.go:124] ! W0526 21:23:34.792981       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0526 21:25:17.417081  527485 command_runner.go:124] ! W0526 21:23:34.795544       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:17.417193  527485 command_runner.go:124] ! W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0526 21:25:17.417287  527485 command_runner.go:124] ! W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:25:17.417561  527485 command_runner.go:124] ! I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:17.417657  527485 command_runner.go:124] ! I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:17.417725  527485 command_runner.go:124] ! I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:25:17.418064  527485 command_runner.go:124] ! I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:17.418159  527485 command_runner.go:124] ! E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:17.418425  527485 command_runner.go:124] ! E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:17.418500  527485 command_runner.go:124] ! E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:17.418601  527485 command_runner.go:124] ! E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0526 21:25:17.418846  527485 command_runner.go:124] ! E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0526 21:25:17.418951  527485 command_runner.go:124] ! E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:17.419299  527485 command_runner.go:124] ! E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:17.419377  527485 command_runner.go:124] ! E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:17.419475  527485 command_runner.go:124] ! E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:17.419623  527485 command_runner.go:124] ! E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0526 21:25:17.419694  527485 command_runner.go:124] ! E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:17.420075  527485 command_runner.go:124] ! E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0526 21:25:17.420144  527485 command_runner.go:124] ! E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:17.420398  527485 command_runner.go:124] ! E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:17.420484  527485 command_runner.go:124] ! E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:17.420586  527485 command_runner.go:124] ! E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:17.420682  527485 command_runner.go:124] ! E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:17.421022  527485 command_runner.go:124] ! E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:17.421078  527485 command_runner.go:124] ! I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0526 21:25:17.424942  527485 logs.go:123] Gathering logs for kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] ...
	I0526 21:25:17.424956  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:17.444078  527485 command_runner.go:124] ! Flag --port has been deprecated, see --secure-port instead.
	I0526 21:25:17.444154  527485 command_runner.go:124] ! I0526 21:23:30.770698       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:17.444506  527485 command_runner.go:124] ! I0526 21:23:31.105740       1 controllermanager.go:176] Version: v1.20.2
	I0526 21:25:17.444643  527485 command_runner.go:124] ! I0526 21:23:31.110528       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:17.444783  527485 command_runner.go:124] ! I0526 21:23:31.110685       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.445172  527485 command_runner.go:124] ! I0526 21:23:31.111406       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	I0526 21:25:17.445292  527485 command_runner.go:124] ! I0526 21:23:31.111685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:17.445688  527485 command_runner.go:124] ! I0526 21:23:37.283320       1 shared_informer.go:240] Waiting for caches to sync for tokens
	I0526 21:25:17.445922  527485 command_runner.go:124] ! I0526 21:23:37.384858       1 shared_informer.go:247] Caches are synced for tokens 
	I0526 21:25:17.446060  527485 command_runner.go:124] ! I0526 21:23:37.398260       1 controllermanager.go:554] Started "csrcleaner"
	I0526 21:25:17.446406  527485 command_runner.go:124] ! I0526 21:23:37.398681       1 cleaner.go:82] Starting CSR cleaner controller
	I0526 21:25:17.446539  527485 command_runner.go:124] ! I0526 21:23:37.436326       1 controllermanager.go:554] Started "tokencleaner"
	I0526 21:25:17.446909  527485 command_runner.go:124] ! I0526 21:23:37.436948       1 tokencleaner.go:118] Starting token cleaner controller
	I0526 21:25:17.447031  527485 command_runner.go:124] ! I0526 21:23:37.437051       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0526 21:25:17.447231  527485 command_runner.go:124] ! I0526 21:23:37.437060       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0526 21:25:17.447450  527485 command_runner.go:124] ! E0526 21:23:37.458692       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	I0526 21:25:17.447548  527485 command_runner.go:124] ! W0526 21:23:37.458788       1 controllermanager.go:546] Skipping "service"
	I0526 21:25:17.447998  527485 command_runner.go:124] ! I0526 21:23:37.485897       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	I0526 21:25:17.448078  527485 command_runner.go:124] ! W0526 21:23:37.486148       1 controllermanager.go:546] Skipping "ephemeral-volume"
	I0526 21:25:17.448448  527485 command_runner.go:124] ! I0526 21:23:37.486971       1 publisher.go:98] Starting root CA certificate configmap publisher
	I0526 21:25:17.448545  527485 command_runner.go:124] ! I0526 21:23:37.487325       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0526 21:25:17.449122  527485 command_runner.go:124] ! I0526 21:23:37.514186       1 controllermanager.go:554] Started "endpointslicemirroring"
	I0526 21:25:17.449148  527485 command_runner.go:124] ! I0526 21:23:37.515190       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0526 21:25:17.449162  527485 command_runner.go:124] ! I0526 21:23:37.515570       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0526 21:25:17.449180  527485 command_runner.go:124] ! I0526 21:23:37.550580       1 controllermanager.go:554] Started "replicaset"
	I0526 21:25:17.449192  527485 command_runner.go:124] ! I0526 21:23:37.551218       1 replica_set.go:182] Starting replicaset controller
	I0526 21:25:17.449208  527485 command_runner.go:124] ! I0526 21:23:37.551414       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0526 21:25:17.449224  527485 command_runner.go:124] ! I0526 21:23:37.987267       1 controllermanager.go:554] Started "horizontalpodautoscaling"
	I0526 21:25:17.449238  527485 command_runner.go:124] ! I0526 21:23:37.988181       1 horizontal.go:169] Starting HPA controller
	I0526 21:25:17.449254  527485 command_runner.go:124] ! I0526 21:23:37.988418       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0526 21:25:17.449276  527485 command_runner.go:124] ! I0526 21:23:38.238507       1 controllermanager.go:554] Started "persistentvolume-binder"
	I0526 21:25:17.449289  527485 command_runner.go:124] ! I0526 21:23:38.238941       1 pv_controller_base.go:307] Starting persistent volume controller
	I0526 21:25:17.449306  527485 command_runner.go:124] ! I0526 21:23:38.238953       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0526 21:25:17.449349  527485 command_runner.go:124] ! I0526 21:23:38.636899       1 controllermanager.go:554] Started "garbagecollector"
	I0526 21:25:17.449367  527485 command_runner.go:124] ! I0526 21:23:38.636902       1 garbagecollector.go:142] Starting garbage collector controller
	I0526 21:25:17.449380  527485 command_runner.go:124] ! I0526 21:23:38.636960       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:17.449394  527485 command_runner.go:124] ! I0526 21:23:38.637525       1 graph_builder.go:289] GraphBuilder running
	I0526 21:25:17.449405  527485 command_runner.go:124] ! I0526 21:23:39.037283       1 controllermanager.go:554] Started "disruption"
	I0526 21:25:17.449421  527485 command_runner.go:124] ! I0526 21:23:39.037574       1 disruption.go:331] Starting disruption controller
	I0526 21:25:17.449437  527485 command_runner.go:124] ! I0526 21:23:39.037585       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0526 21:25:17.449453  527485 command_runner.go:124] ! I0526 21:23:39.286540       1 controllermanager.go:554] Started "clusterrole-aggregation"
	I0526 21:25:17.449470  527485 command_runner.go:124] ! I0526 21:23:39.286598       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
	I0526 21:25:17.449490  527485 command_runner.go:124] ! I0526 21:23:39.286605       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0526 21:25:17.449506  527485 command_runner.go:124] ! I0526 21:23:39.537304       1 controllermanager.go:554] Started "pvc-protection"
	I0526 21:25:17.449522  527485 command_runner.go:124] ! I0526 21:23:39.537579       1 pvc_protection_controller.go:110] Starting PVC protection controller
	I0526 21:25:17.449540  527485 command_runner.go:124] ! I0526 21:23:39.537670       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0526 21:25:17.449556  527485 command_runner.go:124] ! I0526 21:23:39.786982       1 controllermanager.go:554] Started "pv-protection"
	I0526 21:25:17.449572  527485 command_runner.go:124] ! I0526 21:23:39.787110       1 pv_protection_controller.go:83] Starting PV protection controller
	I0526 21:25:17.449586  527485 command_runner.go:124] ! I0526 21:23:39.787118       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0526 21:25:17.449601  527485 command_runner.go:124] ! I0526 21:23:40.036383       1 controllermanager.go:554] Started "endpoint"
	I0526 21:25:17.449614  527485 command_runner.go:124] ! I0526 21:23:40.036415       1 endpoints_controller.go:184] Starting endpoint controller
	I0526 21:25:17.449629  527485 command_runner.go:124] ! I0526 21:23:40.037058       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0526 21:25:17.449648  527485 command_runner.go:124] ! I0526 21:23:40.288607       1 controllermanager.go:554] Started "podgc"
	I0526 21:25:17.449663  527485 command_runner.go:124] ! I0526 21:23:40.288827       1 gc_controller.go:89] Starting GC controller
	I0526 21:25:17.449678  527485 command_runner.go:124] ! I0526 21:23:40.289411       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0526 21:25:17.449702  527485 command_runner.go:124] ! W0526 21:23:40.988861       1 shared_informer.go:494] resyncPeriod 13h30m7.5724073s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:17.449720  527485 command_runner.go:124] ! I0526 21:23:40.989960       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
	I0526 21:25:17.449741  527485 command_runner.go:124] ! I0526 21:23:40.990215       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
	I0526 21:25:17.449760  527485 command_runner.go:124] ! I0526 21:23:40.990426       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
	I0526 21:25:17.449781  527485 command_runner.go:124] ! I0526 21:23:40.990971       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0526 21:25:17.449802  527485 command_runner.go:124] ! I0526 21:23:40.991569       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0526 21:25:17.449822  527485 command_runner.go:124] ! I0526 21:23:40.991963       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0526 21:25:17.449840  527485 command_runner.go:124] ! I0526 21:23:40.992141       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
	I0526 21:25:17.449860  527485 command_runner.go:124] ! I0526 21:23:40.992301       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0526 21:25:17.449879  527485 command_runner.go:124] ! I0526 21:23:40.992532       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
	I0526 21:25:17.449903  527485 command_runner.go:124] ! W0526 21:23:40.992690       1 shared_informer.go:494] resyncPeriod 13h37m25.694603534s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:17.449923  527485 command_runner.go:124] ! I0526 21:23:40.993075       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0526 21:25:17.449943  527485 command_runner.go:124] ! I0526 21:23:40.993243       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0526 21:25:17.449962  527485 command_runner.go:124] ! I0526 21:23:40.993580       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
	I0526 21:25:17.449981  527485 command_runner.go:124] ! I0526 21:23:40.993747       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0526 21:25:17.450033  527485 command_runner.go:124] ! I0526 21:23:40.993780       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
	I0526 21:25:17.450054  527485 command_runner.go:124] ! I0526 21:23:40.993805       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0526 21:25:17.450072  527485 command_runner.go:124] ! I0526 21:23:40.993841       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
	I0526 21:25:17.450091  527485 command_runner.go:124] ! I0526 21:23:40.993861       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
	I0526 21:25:17.450109  527485 command_runner.go:124] ! I0526 21:23:40.993876       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
	I0526 21:25:17.450127  527485 command_runner.go:124] ! I0526 21:23:40.993891       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
	I0526 21:25:17.450145  527485 command_runner.go:124] ! I0526 21:23:40.993951       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0526 21:25:17.450163  527485 command_runner.go:124] ! I0526 21:23:40.993980       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0526 21:25:17.450178  527485 command_runner.go:124] ! I0526 21:23:40.994082       1 controllermanager.go:554] Started "resourcequota"
	I0526 21:25:17.450210  527485 command_runner.go:124] ! I0526 21:23:40.994178       1 resource_quota_controller.go:273] Starting resource quota controller
	I0526 21:25:17.450227  527485 command_runner.go:124] ! I0526 21:23:40.994191       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:17.450242  527485 command_runner.go:124] ! I0526 21:23:40.994219       1 resource_quota_monitor.go:304] QuotaMonitor running
	I0526 21:25:17.450257  527485 command_runner.go:124] ! I0526 21:23:41.028175       1 controllermanager.go:554] Started "namespace"
	I0526 21:25:17.450278  527485 command_runner.go:124] ! I0526 21:23:41.028716       1 namespace_controller.go:200] Starting namespace controller
	I0526 21:25:17.450294  527485 command_runner.go:124] ! I0526 21:23:41.028992       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0526 21:25:17.450308  527485 command_runner.go:124] ! I0526 21:23:41.051981       1 controllermanager.go:554] Started "ttl"
	I0526 21:25:17.450323  527485 command_runner.go:124] ! I0526 21:23:41.052926       1 ttl_controller.go:121] Starting TTL controller
	I0526 21:25:17.450338  527485 command_runner.go:124] ! I0526 21:23:41.053383       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0526 21:25:17.450353  527485 command_runner.go:124] ! I0526 21:23:41.289145       1 controllermanager.go:554] Started "attachdetach"
	I0526 21:25:17.450369  527485 command_runner.go:124] ! W0526 21:23:41.289246       1 controllermanager.go:546] Skipping "ttl-after-finished"
	I0526 21:25:17.450386  527485 command_runner.go:124] ! I0526 21:23:41.289282       1 attach_detach_controller.go:328] Starting attach detach controller
	I0526 21:25:17.450406  527485 command_runner.go:124] ! I0526 21:23:41.289291       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0526 21:25:17.450421  527485 command_runner.go:124] ! I0526 21:23:41.537362       1 controllermanager.go:554] Started "serviceaccount"
	I0526 21:25:17.450438  527485 command_runner.go:124] ! I0526 21:23:41.537403       1 serviceaccounts_controller.go:117] Starting service account controller
	I0526 21:25:17.450454  527485 command_runner.go:124] ! I0526 21:23:41.538137       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0526 21:25:17.450469  527485 command_runner.go:124] ! I0526 21:23:41.787243       1 controllermanager.go:554] Started "deployment"
	I0526 21:25:17.450484  527485 command_runner.go:124] ! I0526 21:23:41.788023       1 deployment_controller.go:153] Starting deployment controller
	I0526 21:25:17.450500  527485 command_runner.go:124] ! I0526 21:23:41.790417       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0526 21:25:17.450515  527485 command_runner.go:124] ! I0526 21:23:41.936235       1 controllermanager.go:554] Started "csrapproving"
	I0526 21:25:17.450532  527485 command_runner.go:124] ! I0526 21:23:41.936293       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0526 21:25:17.450550  527485 command_runner.go:124] ! I0526 21:23:41.936301       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0526 21:25:17.450570  527485 command_runner.go:124] ! I0526 21:23:42.137381       1 request.go:655] Throttling request took 1.048213324s, request: GET:https://192.168.39.229:8443/apis/extensions/v1beta1?timeout=32s
	I0526 21:25:17.450586  527485 command_runner.go:124] ! I0526 21:23:42.189224       1 node_ipam_controller.go:91] Sending events to api server.
	I0526 21:25:17.450601  527485 command_runner.go:124] ! I0526 21:23:52.210125       1 range_allocator.go:82] Sending events to api server.
	I0526 21:25:17.450621  527485 command_runner.go:124] ! I0526 21:23:52.211056       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
	I0526 21:25:17.450636  527485 command_runner.go:124] ! I0526 21:23:52.211333       1 controllermanager.go:554] Started "nodeipam"
	I0526 21:25:17.450657  527485 command_runner.go:124] ! W0526 21:23:52.211708       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	I0526 21:25:17.450671  527485 command_runner.go:124] ! W0526 21:23:52.212021       1 controllermanager.go:546] Skipping "route"
	I0526 21:25:17.450686  527485 command_runner.go:124] ! I0526 21:23:52.212292       1 node_ipam_controller.go:159] Starting ipam controller
	I0526 21:25:17.450701  527485 command_runner.go:124] ! I0526 21:23:52.212876       1 shared_informer.go:240] Waiting for caches to sync for node
	I0526 21:25:17.450753  527485 command_runner.go:124] ! I0526 21:23:52.227871       1 node_lifecycle_controller.go:77] Sending events to api server
	I0526 21:25:17.450810  527485 command_runner.go:124] ! E0526 21:23:52.227991       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
	I0526 21:25:17.450823  527485 command_runner.go:124] ! W0526 21:23:52.228003       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
	I0526 21:25:17.450839  527485 command_runner.go:124] ! I0526 21:23:52.257128       1 controllermanager.go:554] Started "persistentvolume-expander"
	I0526 21:25:17.450858  527485 command_runner.go:124] ! I0526 21:23:52.257967       1 expand_controller.go:310] Starting expand controller
	I0526 21:25:17.450874  527485 command_runner.go:124] ! I0526 21:23:52.258344       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0526 21:25:17.450890  527485 command_runner.go:124] ! I0526 21:23:52.287731       1 controllermanager.go:554] Started "endpointslice"
	I0526 21:25:17.450907  527485 command_runner.go:124] ! I0526 21:23:52.287941       1 endpointslice_controller.go:237] Starting endpoint slice controller
	I0526 21:25:17.450923  527485 command_runner.go:124] ! I0526 21:23:52.287950       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0526 21:25:17.450936  527485 command_runner.go:124] ! I0526 21:23:52.334629       1 controllermanager.go:554] Started "daemonset"
	I0526 21:25:17.450952  527485 command_runner.go:124] ! I0526 21:23:52.334789       1 daemon_controller.go:285] Starting daemon sets controller
	I0526 21:25:17.450967  527485 command_runner.go:124] ! I0526 21:23:52.334797       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0526 21:25:17.450980  527485 command_runner.go:124] ! I0526 21:23:52.366633       1 controllermanager.go:554] Started "statefulset"
	I0526 21:25:17.450997  527485 command_runner.go:124] ! I0526 21:23:52.366920       1 stateful_set.go:146] Starting stateful set controller
	I0526 21:25:17.451014  527485 command_runner.go:124] ! I0526 21:23:52.367009       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0526 21:25:17.451029  527485 command_runner.go:124] ! I0526 21:23:52.395670       1 controllermanager.go:554] Started "cronjob"
	I0526 21:25:17.451044  527485 command_runner.go:124] ! I0526 21:23:52.395842       1 cronjob_controller.go:96] Starting CronJob Manager
	I0526 21:25:17.451061  527485 command_runner.go:124] ! I0526 21:23:52.416080       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0526 21:25:17.451078  527485 command_runner.go:124] ! I0526 21:23:52.416256       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0526 21:25:17.451098  527485 command_runner.go:124] ! I0526 21:23:52.416385       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451120  527485 command_runner.go:124] ! I0526 21:23:52.416862       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0526 21:25:17.451140  527485 command_runner.go:124] ! I0526 21:23:52.416958       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0526 21:25:17.451160  527485 command_runner.go:124] ! I0526 21:23:52.416975       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451178  527485 command_runner.go:124] ! I0526 21:23:52.417715       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0526 21:25:17.451196  527485 command_runner.go:124] ! I0526 21:23:52.417882       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0526 21:25:17.451215  527485 command_runner.go:124] ! I0526 21:23:52.418025       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451229  527485 command_runner.go:124] ! I0526 21:23:52.418373       1 controllermanager.go:554] Started "csrsigning"
	I0526 21:25:17.451243  527485 command_runner.go:124] ! I0526 21:23:52.418419       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0526 21:25:17.451268  527485 command_runner.go:124] ! I0526 21:23:52.418799       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451286  527485 command_runner.go:124] ! I0526 21:23:52.418805       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0526 21:25:17.451302  527485 command_runner.go:124] ! I0526 21:23:52.515732       1 controllermanager.go:554] Started "bootstrapsigner"
	I0526 21:25:17.451318  527485 command_runner.go:124] ! I0526 21:23:52.516431       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0526 21:25:17.451333  527485 command_runner.go:124] ! I0526 21:23:52.765741       1 controllermanager.go:554] Started "replicationcontroller"
	I0526 21:25:17.451348  527485 command_runner.go:124] ! I0526 21:23:52.765769       1 replica_set.go:182] Starting replicationcontroller controller
	I0526 21:25:17.451364  527485 command_runner.go:124] ! I0526 21:23:52.765867       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0526 21:25:17.451381  527485 command_runner.go:124] ! I0526 21:23:52.915756       1 node_lifecycle_controller.go:380] Sending events to api server.
	I0526 21:25:17.451395  527485 command_runner.go:124] ! I0526 21:23:52.916150       1 taint_manager.go:163] Sending events to api server.
	I0526 21:25:17.451410  527485 command_runner.go:124] ! I0526 21:23:52.916342       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
	I0526 21:25:17.451419  527485 command_runner.go:124] ! I0526 21:23:52.916386       1 controllermanager.go:554] Started "nodelifecycle"
	I0526 21:25:17.451427  527485 command_runner.go:124] ! I0526 21:23:52.916749       1 node_lifecycle_controller.go:542] Starting node controller
	I0526 21:25:17.451441  527485 command_runner.go:124] ! I0526 21:23:52.916921       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0526 21:25:17.451451  527485 command_runner.go:124] ! I0526 21:23:53.165965       1 controllermanager.go:554] Started "job"
	I0526 21:25:17.451459  527485 command_runner.go:124] ! I0526 21:23:53.166025       1 job_controller.go:148] Starting job controller
	I0526 21:25:17.451468  527485 command_runner.go:124] ! I0526 21:23:53.167211       1 shared_informer.go:240] Waiting for caches to sync for job
	I0526 21:25:17.451479  527485 command_runner.go:124] ! I0526 21:23:53.170385       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:17.451498  527485 command_runner.go:124] ! W0526 21:23:53.178965       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955" does not exist
	I0526 21:25:17.451509  527485 command_runner.go:124] ! I0526 21:23:53.213010       1 shared_informer.go:247] Caches are synced for node 
	I0526 21:25:17.451517  527485 command_runner.go:124] ! I0526 21:23:53.213735       1 range_allocator.go:172] Starting range CIDR allocator
	I0526 21:25:17.451527  527485 command_runner.go:124] ! I0526 21:23:53.214071       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0526 21:25:17.451537  527485 command_runner.go:124] ! I0526 21:23:53.214233       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0526 21:25:17.451549  527485 command_runner.go:124] ! I0526 21:23:53.215982       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0526 21:25:17.451561  527485 command_runner.go:124] ! I0526 21:23:53.216587       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0526 21:25:17.451571  527485 command_runner.go:124] ! I0526 21:23:53.217085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0526 21:25:17.451604  527485 command_runner.go:124] ! I0526 21:23:53.217522       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0526 21:25:17.451616  527485 command_runner.go:124] ! I0526 21:23:53.218215       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0526 21:25:17.451625  527485 command_runner.go:124] ! I0526 21:23:53.218891       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0526 21:25:17.451634  527485 command_runner.go:124] ! I0526 21:23:53.229560       1 shared_informer.go:247] Caches are synced for namespace 
	I0526 21:25:17.451644  527485 command_runner.go:124] ! I0526 21:23:53.235029       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0526 21:25:17.451653  527485 command_runner.go:124] ! I0526 21:23:53.238654       1 shared_informer.go:247] Caches are synced for service account 
	I0526 21:25:17.451671  527485 command_runner.go:124] ! I0526 21:23:53.240824       1 shared_informer.go:247] Caches are synced for endpoint 
	I0526 21:25:17.451686  527485 command_runner.go:124] ! I0526 21:23:53.247379       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0526 21:25:17.451702  527485 command_runner.go:124] ! I0526 21:23:53.251558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0526 21:25:17.451717  527485 command_runner.go:124] ! I0526 21:23:53.252699       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0526 21:25:17.451732  527485 command_runner.go:124] ! I0526 21:23:53.256544       1 shared_informer.go:247] Caches are synced for TTL 
	I0526 21:25:17.451750  527485 command_runner.go:124] ! I0526 21:23:53.265652       1 range_allocator.go:373] Set node multinode-20210526212238-510955 PodCIDR to [10.244.0.0/24]
	I0526 21:25:17.451765  527485 command_runner.go:124] ! I0526 21:23:53.268627       1 shared_informer.go:247] Caches are synced for job 
	I0526 21:25:17.451780  527485 command_runner.go:124] ! I0526 21:23:53.268752       1 shared_informer.go:247] Caches are synced for stateful set 
	I0526 21:25:17.451795  527485 command_runner.go:124] ! I0526 21:23:53.290037       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0526 21:25:17.451811  527485 command_runner.go:124] ! I0526 21:23:53.290226       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0526 21:25:17.451826  527485 command_runner.go:124] ! I0526 21:23:53.292847       1 shared_informer.go:247] Caches are synced for deployment 
	I0526 21:25:17.451839  527485 command_runner.go:124] ! I0526 21:23:53.293728       1 shared_informer.go:247] Caches are synced for GC 
	I0526 21:25:17.451854  527485 command_runner.go:124] ! I0526 21:23:53.293879       1 shared_informer.go:247] Caches are synced for HPA 
	I0526 21:25:17.451871  527485 command_runner.go:124] ! I0526 21:23:53.293974       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0526 21:25:17.451886  527485 command_runner.go:124] ! I0526 21:23:53.317816       1 shared_informer.go:247] Caches are synced for taint 
	I0526 21:25:17.451900  527485 command_runner.go:124] ! I0526 21:23:53.317927       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I0526 21:25:17.451913  527485 command_runner.go:124] ! W0526 21:23:53.318278       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955. Assuming now as a timestamp.
	I0526 21:25:17.451926  527485 command_runner.go:124] ! I0526 21:23:53.318396       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0526 21:25:17.451934  527485 command_runner.go:124] ! I0526 21:23:53.318775       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0526 21:25:17.451954  527485 command_runner.go:124] ! I0526 21:23:53.319750       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955 event: Registered Node multinode-20210526212238-510955 in Controller"
	I0526 21:25:17.451968  527485 command_runner.go:124] ! I0526 21:23:53.337883       1 shared_informer.go:247] Caches are synced for disruption 
	I0526 21:25:17.451978  527485 command_runner.go:124] ! I0526 21:23:53.337896       1 disruption.go:339] Sending events to api server.
	I0526 21:25:17.451986  527485 command_runner.go:124] ! I0526 21:23:53.368948       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0526 21:25:17.452001  527485 command_runner.go:124] ! I0526 21:23:53.431193       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2wgbs"
	I0526 21:25:17.452018  527485 command_runner.go:124] ! I0526 21:23:53.431223       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0526 21:25:17.452029  527485 command_runner.go:124] ! I0526 21:23:53.459736       1 shared_informer.go:247] Caches are synced for expand 
	I0526 21:25:17.452037  527485 command_runner.go:124] ! I0526 21:23:53.479631       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:17.452047  527485 command_runner.go:124] ! I0526 21:23:53.487838       1 shared_informer.go:247] Caches are synced for PV protection 
	I0526 21:25:17.452056  527485 command_runner.go:124] ! I0526 21:23:53.489356       1 shared_informer.go:247] Caches are synced for attach detach 
	I0526 21:25:17.452063  527485 command_runner.go:124] ! I0526 21:23:53.494672       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:17.452073  527485 command_runner.go:124] ! I0526 21:23:53.539359       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0526 21:25:17.452087  527485 command_runner.go:124] ! I0526 21:23:53.545401       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbl42"
	I0526 21:25:17.452103  527485 command_runner.go:124] ! I0526 21:23:53.545422       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:17.452119  527485 command_runner.go:124] ! I0526 21:23:53.556102       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-multinode-20210526212238-510955" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:25:17.452146  527485 command_runner.go:124] ! I0526 21:23:53.567036       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tw67b"
	I0526 21:25:17.452166  527485 command_runner.go:124] ! E0526 21:23:53.635384       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0526 21:25:17.452179  527485 command_runner.go:124] ! I0526 21:23:53.689947       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:17.452195  527485 command_runner.go:124] ! I0526 21:23:53.733785       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0526 21:25:17.452211  527485 command_runner.go:124] ! I0526 21:23:53.758013       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:17.452224  527485 command_runner.go:124] ! I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:17.452233  527485 command_runner.go:124] ! I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:17.452244  527485 command_runner.go:124] ! I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:25:17.452256  527485 command_runner.go:124] ! I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0526 21:25:19.962314  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:25:19.962338  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.962343  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.962347  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:19.966519  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:25:19.966545  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:19.966552  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:19.966558  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:19.966564  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:19.966569  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:19.966574  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:19 GMT
	I0526 21:25:19.968943  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 52658 chars]
	I0526 21:25:19.970240  527485 system_pods.go:59] 8 kube-system pods found
	I0526 21:25:19.970288  527485 system_pods.go:61] "coredns-74ff55c5b-tw67b" [a0522c32-9960-4c21-8a5a-d0b137009166] Running
	I0526 21:25:19.970303  527485 system_pods.go:61] "etcd-multinode-20210526212238-510955" [6e073b61-d86c-4e7a-a1ad-aa5daefd710b] Running
	I0526 21:25:19.970308  527485 system_pods.go:61] "kindnet-2wgbs" [aac3ff91-8f9c-4f4e-81fc-a859f780d67d] Running
	I0526 21:25:19.970312  527485 system_pods.go:61] "kube-apiserver-multinode-20210526212238-510955" [5d446255-3487-4319-9b9f-2294a93fd226] Running
	I0526 21:25:19.970316  527485 system_pods.go:61] "kube-controller-manager-multinode-20210526212238-510955" [ff663293-6f11-48e7-9409-1637114dc587] Running
	I0526 21:25:19.970321  527485 system_pods.go:61] "kube-proxy-qbl42" [950a915d-c5f0-4e6f-bc12-ee97013032f0] Running
	I0526 21:25:19.970325  527485 system_pods.go:61] "kube-scheduler-multinode-20210526212238-510955" [66bb91fe-7af2-400f-a477-fe2dc3428e83] Running
	I0526 21:25:19.970330  527485 system_pods.go:61] "storage-provisioner" [e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36] Running
	I0526 21:25:19.970335  527485 system_pods.go:74] duration metric: took 3.182240535s to wait for pod list to return data ...
	I0526 21:25:19.970345  527485 default_sa.go:34] waiting for default service account to be created ...
	I0526 21:25:19.970396  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/default/serviceaccounts
	I0526 21:25:19.970404  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.970408  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.970412  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:19.973097  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:19.973117  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:19.973124  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:19.973129  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:19.973134  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:19.973140  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:19.973145  527485 round_trippers.go:454]     Content-Length: 304
	I0526 21:25:19.973158  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:19 GMT
	I0526 21:25:19.973400  527485 request.go:1107] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7ed7b6cf-a0e1-4add-9aa6-5087c856497d","resourceVersion":"434","creationTimestamp":"2021-05-26T21:23:53Z"},"secrets":[{"name":"default-token-cdspv"}]}]}
	I0526 21:25:19.974116  527485 default_sa.go:45] found service account: "default"
	I0526 21:25:19.974136  527485 default_sa.go:55] duration metric: took 3.786239ms for default service account to be created ...
	I0526 21:25:19.974143  527485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0526 21:25:19.974182  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:25:19.974190  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.974194  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.974198  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:19.981719  527485 round_trippers.go:448] Response Status: 200 OK in 7 milliseconds
	I0526 21:25:19.981737  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:19.981743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:19.981748  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:19.981754  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:19 GMT
	I0526 21:25:19.981759  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:19.981776  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:19.982672  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 52658 chars]
	I0526 21:25:19.983890  527485 system_pods.go:86] 8 kube-system pods found
	I0526 21:25:19.983909  527485 system_pods.go:89] "coredns-74ff55c5b-tw67b" [a0522c32-9960-4c21-8a5a-d0b137009166] Running
	I0526 21:25:19.983916  527485 system_pods.go:89] "etcd-multinode-20210526212238-510955" [6e073b61-d86c-4e7a-a1ad-aa5daefd710b] Running
	I0526 21:25:19.983925  527485 system_pods.go:89] "kindnet-2wgbs" [aac3ff91-8f9c-4f4e-81fc-a859f780d67d] Running
	I0526 21:25:19.983935  527485 system_pods.go:89] "kube-apiserver-multinode-20210526212238-510955" [5d446255-3487-4319-9b9f-2294a93fd226] Running
	I0526 21:25:19.983945  527485 system_pods.go:89] "kube-controller-manager-multinode-20210526212238-510955" [ff663293-6f11-48e7-9409-1637114dc587] Running
	I0526 21:25:19.983953  527485 system_pods.go:89] "kube-proxy-qbl42" [950a915d-c5f0-4e6f-bc12-ee97013032f0] Running
	I0526 21:25:19.983960  527485 system_pods.go:89] "kube-scheduler-multinode-20210526212238-510955" [66bb91fe-7af2-400f-a477-fe2dc3428e83] Running
	I0526 21:25:19.983964  527485 system_pods.go:89] "storage-provisioner" [e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36] Running
	I0526 21:25:19.983969  527485 system_pods.go:126] duration metric: took 9.821847ms to wait for k8s-apps to be running ...
	I0526 21:25:19.983979  527485 system_svc.go:44] waiting for kubelet service to be running ....
	I0526 21:25:19.984027  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:25:19.994398  527485 system_svc.go:56] duration metric: took 10.413838ms WaitForService to wait for kubelet.
	I0526 21:25:19.994415  527485 kubeadm.go:547] duration metric: took 1m25.186288945s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0526 21:25:19.994431  527485 node_conditions.go:102] verifying NodePressure condition ...
	I0526 21:25:19.994489  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes
	I0526 21:25:19.994498  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.994502  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.994506  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:20.000092  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:25:20.000105  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:20.000110  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:20.000114  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:20.000117  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:20.000121  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:20.000125  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:20 GMT
	I0526 21:25:20.000269  527485 request.go:1107] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T2 [truncated 6155 chars]
	I0526 21:25:20.001209  527485 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0526 21:25:20.001246  527485 node_conditions.go:123] node cpu capacity is 2
	I0526 21:25:20.001261  527485 node_conditions.go:105] duration metric: took 6.822942ms to run NodePressure ...
	I0526 21:25:20.001275  527485 start.go:214] waiting for startup goroutines ...
	I0526 21:25:20.003439  527485 out.go:170] 
	I0526 21:25:20.003724  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:25:20.005541  527485 out.go:170] * Starting node multinode-20210526212238-510955-m02 in cluster multinode-20210526212238-510955
	I0526 21:25:20.005562  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:25:20.005598  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:25:20.005611  527485 cache.go:54] Caching tarball of preloaded images
	I0526 21:25:20.005736  527485 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 21:25:20.005755  527485 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
	I0526 21:25:20.005852  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:25:20.006024  527485 cache.go:191] Successfully downloaded all kic artifacts
	I0526 21:25:20.006050  527485 start.go:313] acquiring machines lock for multinode-20210526212238-510955-m02: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:25:20.006128  527485 start.go:317] acquired machines lock for "multinode-20210526212238-510955-m02" in 61.64µs
	I0526 21:25:20.006171  527485 start.go:89] Provisioning new machine with config: &{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name:m02 IP: Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
	I0526 21:25:20.006258  527485 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0526 21:25:20.008161  527485 out.go:197] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0526 21:25:20.008265  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:20.008309  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:20.019614  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0526 21:25:20.020112  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:20.020604  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:20.020628  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:20.020998  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:20.021172  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:20.021309  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:20.021426  527485 start.go:160] libmachine.API.Create for "multinode-20210526212238-510955" (driver="kvm2")
	I0526 21:25:20.021451  527485 client.go:168] LocalClient.Create starting
	I0526 21:25:20.021484  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem
	I0526 21:25:20.021523  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:25:20.021551  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:25:20.021666  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem
	I0526 21:25:20.021690  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:25:20.021706  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:25:20.021765  527485 main.go:128] libmachine: Running pre-create checks...
	I0526 21:25:20.021778  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .PreCreateCheck
	I0526 21:25:20.021910  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetConfigRaw
	I0526 21:25:20.022245  527485 main.go:128] libmachine: Creating machine...
	I0526 21:25:20.022263  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .Create
	I0526 21:25:20.022370  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Creating KVM machine...
	I0526 21:25:20.024804  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found existing default KVM network
	I0526 21:25:20.024985  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found existing private KVM network mk-multinode-20210526212238-510955
	I0526 21:25:20.025117  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02 ...
	I0526 21:25:20.025143  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso
	I0526 21:25:20.025208  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.025100  527782 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:25:20.025261  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso...
	I0526 21:25:20.210752  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.210598  527782 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa...
	I0526 21:25:20.455411  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.455294  527782 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/multinode-20210526212238-510955-m02.rawdisk...
	I0526 21:25:20.455451  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Writing magic tar header
	I0526 21:25:20.455472  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Writing SSH key tar header
	I0526 21:25:20.455493  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.455432  527782 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02 ...
	I0526 21:25:20.455629  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02
	I0526 21:25:20.455667  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines
	I0526 21:25:20.455690  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02 (perms=drwx------)
	I0526 21:25:20.455710  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:25:20.455734  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1
	I0526 21:25:20.455749  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0526 21:25:20.455768  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines (perms=drwxr-xr-x)
	I0526 21:25:20.455808  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube (perms=drwxr-xr-x)
	I0526 21:25:20.455828  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1 (perms=drwxr-xr-x)
	I0526 21:25:20.455839  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins
	I0526 21:25:20.455858  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home
	I0526 21:25:20.455867  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Skipping /home - not owner
	I0526 21:25:20.455880  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0526 21:25:20.455895  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0526 21:25:20.455904  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Creating domain...
	I0526 21:25:20.482460  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:97:3e:6b in network default
	I0526 21:25:20.482620  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Ensuring networks are active...
	I0526 21:25:20.482652  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:20.484777  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Ensuring network default is active
	I0526 21:25:20.485081  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Ensuring network mk-multinode-20210526212238-510955 is active
	I0526 21:25:20.485392  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Getting domain xml...
	I0526 21:25:20.487191  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Creating domain...
	I0526 21:25:20.846400  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Waiting to get IP...
	I0526 21:25:20.847229  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:20.847637  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:20.847666  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.847604  527782 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0526 21:25:21.111830  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.112397  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.112428  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:21.112336  527782 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0526 21:25:21.494744  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.495238  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.495268  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:21.495174  527782 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0526 21:25:21.919597  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.919976  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.920010  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:21.919924  527782 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0526 21:25:22.394448  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.394860  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.394893  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:22.394806  527782 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0526 21:25:22.983159  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.983516  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.983545  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:22.983483  527782 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0526 21:25:23.819521  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:23.819856  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:23.819880  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:23.819817  527782 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0526 21:25:24.567585  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:24.567942  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:24.567967  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:24.567900  527782 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0526 21:25:25.557029  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:25.557334  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:25.557362  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:25.557289  527782 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0526 21:25:26.748560  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:26.748940  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:26.748974  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:26.748888  527782 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0526 21:25:28.428556  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:28.428929  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:28.428959  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:28.428848  527782 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0526 21:25:30.776577  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:30.777005  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:30.777037  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:30.776944  527782 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0526 21:25:34.145475  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:34.145873  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:34.145899  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:34.145842  527782 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0526 21:25:37.267146  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.267618  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Found IP for machine: 192.168.39.87
	I0526 21:25:37.267643  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Reserving static IP address...
	I0526 21:25:37.267663  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has current primary IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.267985  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210526212238-510955-m02", mac: "52:54:00:9f:f1:a0", ip: "192.168.39.87"} in network mk-multinode-20210526212238-510955
	I0526 21:25:37.318746  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Getting to WaitForSSH function...
	I0526 21:25:37.318801  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Reserved static IP address: 192.168.39.87
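The repeated "will retry after ...: waiting for machine to come up" lines above are a poll loop that sleeps a growing, jittered delay between attempts until the domain gets a DHCP lease. A minimal Go sketch of that pattern; waitForIP and the exact delay schedule are illustrative, not minikube's actual retry helper:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it reports an address or the timeout expires,
    // sleeping a growing, jittered delay between attempts, as in the log above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // back off gradually
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.87", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }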
	I0526 21:25:37.318818  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Waiting for SSH to be available...
	I0526 21:25:37.324082  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.324481  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.324518  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.324641  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Using SSH client type: external
	I0526 21:25:37.324674  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa (-rw-------)
	I0526 21:25:37.324716  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:25:37.324732  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | About to run SSH command:
	I0526 21:25:37.324745  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | exit 0
	I0526 21:25:37.460600  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | SSH cmd err, output: <nil>: 
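The "Using SSH client type: external" block above shows libmachine shelling out to /usr/bin/ssh with a fixed, non-interactive option set and running "exit 0" as a reachability probe. A rough Go sketch of that call, reusing the user, host, key path, and options from the log (probeSSH is a made-up helper name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeSSH runs "exit 0" on the guest through the system ssh binary, with the
    // same non-interactive options that appear in the log above.
    func probeSSH(user, host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, host),
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Key path taken from the log; adjust for a real environment.
        err := probeSSH("docker", "192.168.39.87",
            "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa")
        fmt.Println("SSH reachable:", err == nil)
    }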
	I0526 21:25:37.461021  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) KVM machine creation complete!
	I0526 21:25:37.461113  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetConfigRaw
	I0526 21:25:37.461703  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:37.461920  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:37.462073  527485 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0526 21:25:37.462096  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetState
	I0526 21:25:37.464483  527485 main.go:128] libmachine: Detecting operating system of created instance...
	I0526 21:25:37.464498  527485 main.go:128] libmachine: Waiting for SSH to be available...
	I0526 21:25:37.464505  527485 main.go:128] libmachine: Getting to WaitForSSH function...
	I0526 21:25:37.464512  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.468822  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.469158  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.469191  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.469270  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.469439  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.469592  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.469696  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.469829  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.470066  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.470086  527485 main.go:128] libmachine: About to run SSH command:
	exit 0
	I0526 21:25:37.591991  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:25:37.592015  527485 main.go:128] libmachine: Detecting the provisioner...
	I0526 21:25:37.592026  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.596752  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.597079  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.597107  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.597212  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.597391  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.597550  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.597690  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.597835  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.597964  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.597976  527485 main.go:128] libmachine: About to run SSH command:
	cat /etc/os-release
	I0526 21:25:37.722027  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0526 21:25:37.722148  527485 main.go:128] libmachine: found compatible host: buildroot
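Provisioner detection above amounts to running "cat /etc/os-release" over SSH and matching the parsed fields against known provisioners. A small Go sketch of that parsing and matching step (detectProvisioner is an illustrative name, not minikube's API):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner picks a provisioner from /etc/os-release output, the way
    // the "found compatible host: buildroot" decision above is made.
    func detectProvisioner(osRelease string) string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            fields[kv[0]] = strings.Trim(kv[1], `"`)
        }
        switch fields["ID"] {
        case "buildroot":
            return "buildroot"
        case "ubuntu", "debian":
            return "systemd-based"
        default:
            return "unknown"
        }
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nVERSION_ID=2020.02.12\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
        fmt.Println("found compatible host:", detectProvisioner(out))
    }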
	I0526 21:25:37.722162  527485 main.go:128] libmachine: Provisioning with buildroot...
	I0526 21:25:37.722176  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:37.722435  527485 buildroot.go:166] provisioning hostname "multinode-20210526212238-510955-m02"
	I0526 21:25:37.722468  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:37.722618  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.728325  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.728682  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.728708  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.728918  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.729093  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.729268  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.729423  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.729569  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.729707  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.729738  527485 main.go:128] libmachine: About to run SSH command:
	sudo hostname multinode-20210526212238-510955-m02 && echo "multinode-20210526212238-510955-m02" | sudo tee /etc/hostname
	I0526 21:25:37.861077  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: multinode-20210526212238-510955-m02
	
	I0526 21:25:37.861117  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.866154  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.866468  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.866503  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.866603  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.866801  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.866961  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.867121  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.867324  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.867465  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.867488  527485 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210526212238-510955-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210526212238-510955-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210526212238-510955-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:25:37.994577  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:25:37.994605  527485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikub
e/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:25:37.994632  527485 buildroot.go:174] setting up certificates
	I0526 21:25:37.994644  527485 provision.go:83] configureAuth start
	I0526 21:25:37.994657  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:37.994877  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:37.999693  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.999978  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.000004  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.000166  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.004413  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.004690  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.004721  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.004820  527485 provision.go:137] copyHostCerts
	I0526 21:25:38.004856  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:25:38.004949  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:25:38.004962  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:25:38.005019  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:25:38.005112  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:25:38.005137  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:25:38.005142  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:25:38.005165  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:25:38.005210  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:25:38.005231  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:25:38.005239  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:25:38.005262  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:25:38.005306  527485 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.multinode-20210526212238-510955-m02 san=[192.168.39.87 192.168.39.87 localhost 127.0.0.1 minikube multinode-20210526212238-510955-m02]
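The "generating server cert" line above produces a server certificate whose SANs cover the node IP, localhost, minikube, and the node hostname. A condensed Go sketch of issuing a certificate with those SANs; it self-signs for brevity, whereas the real step signs with the .minikube CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210526212238-510955-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log above.
            IPAddresses: []net.IP{net.ParseIP("192.168.39.87"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "multinode-20210526212238-510955-m02"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }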
	I0526 21:25:38.122346  527485 provision.go:171] copyRemoteCerts
	I0526 21:25:38.122396  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:25:38.122419  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.126872  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.127179  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.127205  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.127381  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.127545  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.127680  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.127805  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:38.216400  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0526 21:25:38.216440  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:25:38.233050  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0526 21:25:38.233087  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0526 21:25:38.249070  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0526 21:25:38.249106  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0526 21:25:38.265161  527485 provision.go:86] duration metric: configureAuth took 270.50695ms
	I0526 21:25:38.265183  527485 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:25:38.265377  527485 main.go:128] libmachine: Checking connection to Docker...
	I0526 21:25:38.265397  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetURL
	I0526 21:25:38.267393  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Using libvirt version 3000000
	I0526 21:25:38.271542  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.271872  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.271897  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.272016  527485 main.go:128] libmachine: Docker is up and running!
	I0526 21:25:38.272031  527485 main.go:128] libmachine: Reticulating splines...
	I0526 21:25:38.272037  527485 client.go:171] LocalClient.Create took 18.250578511s
	I0526 21:25:38.272055  527485 start.go:168] duration metric: libmachine.API.Create for "multinode-20210526212238-510955" took 18.250628879s
	I0526 21:25:38.272067  527485 start.go:267] post-start starting for "multinode-20210526212238-510955-m02" (driver="kvm2")
	I0526 21:25:38.272074  527485 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:25:38.272089  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.272263  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:25:38.272282  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.277376  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.277710  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.277742  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.277853  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.278052  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.278206  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.278364  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:38.365155  527485 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:25:38.369609  527485 command_runner.go:124] > NAME=Buildroot
	I0526 21:25:38.369632  527485 command_runner.go:124] > VERSION=2020.02.12
	I0526 21:25:38.369642  527485 command_runner.go:124] > ID=buildroot
	I0526 21:25:38.369651  527485 command_runner.go:124] > VERSION_ID=2020.02.12
	I0526 21:25:38.369659  527485 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0526 21:25:38.369698  527485 info.go:137] Remote host: Buildroot 2020.02.12
	I0526 21:25:38.369715  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:25:38.369772  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
	I0526 21:25:38.369890  527485 start.go:270] post-start completed in 97.814032ms
	I0526 21:25:38.369923  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetConfigRaw
	I0526 21:25:38.370477  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:38.375243  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.375578  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.375610  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.375807  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:25:38.375958  527485 start.go:129] duration metric: createHost completed in 18.369690652s
	I0526 21:25:38.375973  527485 start.go:80] releasing machines lock for "multinode-20210526212238-510955-m02", held for 18.369819153s
	I0526 21:25:38.376005  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.376167  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:38.380405  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.380712  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.380745  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.383172  527485 out.go:170] * Found network options:
	I0526 21:25:38.384873  527485 out.go:170]   - NO_PROXY=192.168.39.229
	W0526 21:25:38.384918  527485 proxy.go:118] fail to check proxy env: Error ip not in block
	I0526 21:25:38.384943  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.385107  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.385552  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	W0526 21:25:38.385726  527485 proxy.go:118] fail to check proxy env: Error ip not in block
	I0526 21:25:38.385811  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:25:38.385838  527485 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:25:38.385885  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.385836  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:25:38.385989  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:25:38.386012  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.390617  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.390995  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.391027  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.391147  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.391291  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.391468  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.391600  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:38.391767  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.392091  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.392125  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.392248  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.392383  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.392515  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.392646  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:42.476566  527485 command_runner.go:124] > {
	I0526 21:25:42.476594  527485 command_runner.go:124] >   "images": [
	I0526 21:25:42.476599  527485 command_runner.go:124] >   ]
	I0526 21:25:42.476602  527485 command_runner.go:124] > }
	I0526 21:25:42.477668  527485 command_runner.go:124] ! time="2021-05-26T21:25:38Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0526 21:25:42.477689  527485 command_runner.go:124] ! time="2021-05-26T21:25:40Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:25:42.477710  527485 command_runner.go:124] ! time="2021-05-26T21:25:42Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:25:42.477730  527485 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.09172304s)
	I0526 21:25:42.477759  527485 containerd.go:566] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.2". assuming images are not preloaded.
	I0526 21:25:42.477765  527485 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0526 21:25:42.477794  527485 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0526 21:25:42.477806  527485 command_runner.go:124] > <H1>302 Moved</H1>
	I0526 21:25:42.477810  527485 command_runner.go:124] > The document has moved
	I0526 21:25:42.477810  527485 ssh_runner.go:149] Run: which lz4
	I0526 21:25:42.477820  527485 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0526 21:25:42.477831  527485 command_runner.go:124] > </BODY></HTML>
	I0526 21:25:42.477854  527485 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (4.091991492s)
	I0526 21:25:42.482008  527485 command_runner.go:124] > /bin/lz4
	I0526 21:25:42.482258  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0526 21:25:42.482333  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0526 21:25:42.486505  527485 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:25:42.486947  527485 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:25:42.486974  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (953722271 bytes)
	I0526 21:25:46.520241  527485 containerd.go:503] Took 4.037934 seconds to copy over tarball
	I0526 21:25:46.520321  527485 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0526 21:25:52.928908  527485 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.408556452s)
	I0526 21:25:52.928937  527485 containerd.go:510] Took 6.408662 seconds to extract the tarball
	I0526 21:25:52.928948  527485 ssh_runner.go:100] rm: /preloaded.tar.lz4
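The steps above check whether the preload tarball already exists on the guest (the stat fails), copy it over, extract it with tar's lz4 filter into /var, and finally remove it. A compact Go sketch of that check-copy-extract flow, where run and scp are hypothetical stand-ins for minikube's ssh runner:

    package preload

    // pushPreload mirrors the existence-check / copy / extract / cleanup steps in
    // the log: a failed "stat /preloaded.tar.lz4" triggers the ~900 MB copy, after
    // which the tarball is unpacked into /var with lz4 as tar's compression filter.
    func pushPreload(run func(cmd string) error, scp func(local, remote string) error, localTarball string) error {
        if err := run(`stat /preloaded.tar.lz4`); err != nil {
            // Not on the guest yet, so copy it over first.
            if err := scp(localTarball, "/preloaded.tar.lz4"); err != nil {
                return err
            }
        }
        if err := run(`sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`); err != nil {
            return err
        }
        return run(`sudo rm /preloaded.tar.lz4`)
    }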
	I0526 21:25:52.988755  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:25:53.144586  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:25:53.196649  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:25:53.207985  527485 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0526 21:25:53.248870  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:25:53.260809  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:25:53.271305  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:25:53.285912  527485 command_runner.go:124] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0526 21:25:53.285938  527485 command_runner.go:124] > image-endpoint: unix:///run/containerd/containerd.sock
	I0526 21:25:53.286122  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3Npe
mUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lLm9wdGlvbnNdCiAgICAgICAgICBOb1Bpdm90Um9vdCA9IHRydWUKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvb
GQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0526 21:25:53.299753  527485 command_runner.go:124] > root = "/var/lib/containerd"
	I0526 21:25:53.299768  527485 command_runner.go:124] > state = "/run/containerd"
	I0526 21:25:53.299774  527485 command_runner.go:124] > oom_score = 0
	I0526 21:25:53.299777  527485 command_runner.go:124] > [grpc]
	I0526 21:25:53.299782  527485 command_runner.go:124] >   address = "/run/containerd/containerd.sock"
	I0526 21:25:53.299788  527485 command_runner.go:124] >   uid = 0
	I0526 21:25:53.299793  527485 command_runner.go:124] >   gid = 0
	I0526 21:25:53.299800  527485 command_runner.go:124] >   max_recv_message_size = 16777216
	I0526 21:25:53.299808  527485 command_runner.go:124] >   max_send_message_size = 16777216
	I0526 21:25:53.299815  527485 command_runner.go:124] > [debug]
	I0526 21:25:53.299821  527485 command_runner.go:124] >   address = ""
	I0526 21:25:53.299834  527485 command_runner.go:124] >   uid = 0
	I0526 21:25:53.299839  527485 command_runner.go:124] >   gid = 0
	I0526 21:25:53.299844  527485 command_runner.go:124] >   level = ""
	I0526 21:25:53.299851  527485 command_runner.go:124] > [metrics]
	I0526 21:25:53.299859  527485 command_runner.go:124] >   address = ""
	I0526 21:25:53.299867  527485 command_runner.go:124] >   grpc_histogram = false
	I0526 21:25:53.299873  527485 command_runner.go:124] > [cgroup]
	I0526 21:25:53.299880  527485 command_runner.go:124] >   path = ""
	I0526 21:25:53.299887  527485 command_runner.go:124] > [plugins]
	I0526 21:25:53.299897  527485 command_runner.go:124] >   [plugins.cgroups]
	I0526 21:25:53.299907  527485 command_runner.go:124] >     no_prometheus = false
	I0526 21:25:53.299912  527485 command_runner.go:124] >   [plugins.cri]
	I0526 21:25:53.299919  527485 command_runner.go:124] >     stream_server_address = ""
	I0526 21:25:53.299930  527485 command_runner.go:124] >     stream_server_port = "10010"
	I0526 21:25:53.299938  527485 command_runner.go:124] >     enable_selinux = false
	I0526 21:25:53.299952  527485 command_runner.go:124] >     sandbox_image = "k8s.gcr.io/pause:3.2"
	I0526 21:25:53.299960  527485 command_runner.go:124] >     stats_collect_period = 10
	I0526 21:25:53.299965  527485 command_runner.go:124] >     systemd_cgroup = false
	I0526 21:25:53.299972  527485 command_runner.go:124] >     enable_tls_streaming = false
	I0526 21:25:53.299977  527485 command_runner.go:124] >     max_container_log_line_size = 16384
	I0526 21:25:53.299982  527485 command_runner.go:124] >     [plugins.cri.containerd]
	I0526 21:25:53.299987  527485 command_runner.go:124] >       snapshotter = "overlayfs"
	I0526 21:25:53.299992  527485 command_runner.go:124] >       [plugins.cri.containerd.default_runtime]
	I0526 21:25:53.299998  527485 command_runner.go:124] >         runtime_type = "io.containerd.runc.v2"
	I0526 21:25:53.300005  527485 command_runner.go:124] >         [plugins.cri.containerd.default_runtime.options]
	I0526 21:25:53.300010  527485 command_runner.go:124] >           NoPivotRoot = true
	I0526 21:25:53.300015  527485 command_runner.go:124] >       [plugins.cri.containerd.untrusted_workload_runtime]
	I0526 21:25:53.300019  527485 command_runner.go:124] >         runtime_type = ""
	I0526 21:25:53.300024  527485 command_runner.go:124] >         runtime_engine = ""
	I0526 21:25:53.300031  527485 command_runner.go:124] >         runtime_root = ""
	I0526 21:25:53.300035  527485 command_runner.go:124] >     [plugins.cri.cni]
	I0526 21:25:53.300040  527485 command_runner.go:124] >       bin_dir = "/opt/cni/bin"
	I0526 21:25:53.300045  527485 command_runner.go:124] >       conf_dir = "/etc/cni/net.mk"
	I0526 21:25:53.300049  527485 command_runner.go:124] >       conf_template = ""
	I0526 21:25:53.300053  527485 command_runner.go:124] >     [plugins.cri.registry]
	I0526 21:25:53.300058  527485 command_runner.go:124] >       [plugins.cri.registry.mirrors]
	I0526 21:25:53.300063  527485 command_runner.go:124] >         [plugins.cri.registry.mirrors."docker.io"]
	I0526 21:25:53.300071  527485 command_runner.go:124] >           endpoint = ["https://registry-1.docker.io"]
	I0526 21:25:53.300075  527485 command_runner.go:124] >         [plugins.diff-service]
	I0526 21:25:53.300081  527485 command_runner.go:124] >     default = ["walking"]
	I0526 21:25:53.300087  527485 command_runner.go:124] >   [plugins.scheduler]
	I0526 21:25:53.300091  527485 command_runner.go:124] >     pause_threshold = 0.02
	I0526 21:25:53.300095  527485 command_runner.go:124] >     deletion_threshold = 0
	I0526 21:25:53.300100  527485 command_runner.go:124] >     mutation_threshold = 100
	I0526 21:25:53.300104  527485 command_runner.go:124] >     schedule_delay = "0s"
	I0526 21:25:53.300110  527485 command_runner.go:124] >     startup_delay = "100ms"
	I0526 21:25:53.300223  527485 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:25:53.307622  527485 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:25:53.307992  527485 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:25:53.308044  527485 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0526 21:25:53.330620  527485 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
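The three commands above prepare bridge networking: the sysctl probe fails because br_netfilter is not loaded yet (hence the status-255 warning, which minikube treats as non-fatal), so the module is loaded and IPv4 forwarding is enabled. A local Go sketch of the same sequence, for illustration only (assumes root; not minikube's code):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the bridge-nf-call-iptables key is missing, br_netfilter is not
	// loaded yet, so load the module first.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Enable IPv4 forwarding, as the "echo 1 > /proc/sys/net/ipv4/ip_forward" step does.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}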
	I0526 21:25:53.339265  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:25:53.486532  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:25:57.835864  527485 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (4.349289401s)
	I0526 21:25:57.835899  527485 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:25:57.835961  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:25:57.844201  527485 command_runner.go:124] ! stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:25:57.844555  527485 retry.go:31] will retry after 1.440509088s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:25:59.285247  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:25:59.290171  527485 command_runner.go:124] >   File: /run/containerd/containerd.sock
	I0526 21:25:59.290197  527485 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0526 21:25:59.290206  527485 command_runner.go:124] > Device: 14h/20d	Inode: 30867       Links: 1
	I0526 21:25:59.290217  527485 command_runner.go:124] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:25:59.290236  527485 command_runner.go:124] > Access: 2021-05-26 21:25:57.890195203 +0000
	I0526 21:25:59.290244  527485 command_runner.go:124] > Modify: 2021-05-26 21:25:57.890195203 +0000
	I0526 21:25:59.290253  527485 command_runner.go:124] > Change: 2021-05-26 21:25:57.890195203 +0000
	I0526 21:25:59.290259  527485 command_runner.go:124] >  Birth: -
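The retry above ("will retry after 1.440509088s") is the socket wait that start.go announces: after restarting containerd, stat the socket path repeatedly until it appears or the 60s budget runs out. A self-contained sketch of that loop (illustrative; path and timings taken from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists, containerd is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(1500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}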
	I0526 21:25:59.290557  527485 start.go:401] Will wait 60s for crictl version
	I0526 21:25:59.290617  527485 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:25:59.312397  527485 command_runner.go:124] > Version:  0.1.0
	I0526 21:25:59.312778  527485 command_runner.go:124] > RuntimeName:  containerd
	I0526 21:25:59.312798  527485 command_runner.go:124] > RuntimeVersion:  v1.4.4
	I0526 21:25:59.312806  527485 command_runner.go:124] > RuntimeApiVersion:  v1alpha2
	I0526 21:25:59.313914  527485 start.go:410] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.4
	RuntimeApiVersion:  v1alpha2
	I0526 21:25:59.313963  527485 ssh_runner.go:149] Run: containerd --version
	I0526 21:25:59.351203  527485 command_runner.go:124] > containerd github.com/containerd/containerd v1.4.4 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
	I0526 21:25:59.353503  527485 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	I0526 21:25:59.355074  527485 out.go:170]   - env NO_PROXY=192.168.39.229
	I0526 21:25:59.355139  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:59.360335  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.360736  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:59.360779  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.360938  527485 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0526 21:25:59.364826  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:25:59.375106  527485 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955 for IP: 192.168.39.87
	I0526 21:25:59.375153  527485 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key
	I0526 21:25:59.375169  527485 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key
	I0526 21:25:59.375182  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0526 21:25:59.375194  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0526 21:25:59.375205  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0526 21:25:59.375216  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0526 21:25:59.375268  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem (1338 bytes)
	W0526 21:25:59.375312  527485 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955_empty.pem, impossibly tiny 0 bytes
	I0526 21:25:59.375330  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem (1675 bytes)
	I0526 21:25:59.375356  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem (1078 bytes)
	I0526 21:25:59.375379  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem (1123 bytes)
	I0526 21:25:59.375401  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem (1679 bytes)
	I0526 21:25:59.375427  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.375456  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem -> /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.375838  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0526 21:25:59.392253  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0526 21:25:59.407698  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0526 21:25:59.423260  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0526 21:25:59.439266  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0526 21:25:59.454821  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem --> /usr/share/ca-certificates/510955.pem (1338 bytes)
	I0526 21:25:59.470919  527485 ssh_runner.go:149] Run: openssl version
	I0526 21:25:59.476270  527485 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0526 21:25:59.476758  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0526 21:25:59.484115  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.488098  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.488277  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.488330  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.493735  527485 command_runner.go:124] > b5213941
	I0526 21:25:59.494057  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0526 21:25:59.501464  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510955.pem && ln -fs /usr/share/ca-certificates/510955.pem /etc/ssl/certs/510955.pem"
	I0526 21:25:59.509156  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.513315  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.513663  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.513696  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.519863  527485 command_runner.go:124] > 51391683
	I0526 21:25:59.520201  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510955.pem /etc/ssl/certs/51391683.0"
	I0526 21:25:59.527724  527485 ssh_runner.go:149] Run: sudo crictl info
	I0526 21:25:59.551101  527485 command_runner.go:124] > {
	I0526 21:25:59.551120  527485 command_runner.go:124] >   "status": {
	I0526 21:25:59.551127  527485 command_runner.go:124] >     "conditions": [
	I0526 21:25:59.551132  527485 command_runner.go:124] >       {
	I0526 21:25:59.551139  527485 command_runner.go:124] >         "type": "RuntimeReady",
	I0526 21:25:59.551148  527485 command_runner.go:124] >         "status": true,
	I0526 21:25:59.551154  527485 command_runner.go:124] >         "reason": "",
	I0526 21:25:59.551161  527485 command_runner.go:124] >         "message": ""
	I0526 21:25:59.551168  527485 command_runner.go:124] >       },
	I0526 21:25:59.551173  527485 command_runner.go:124] >       {
	I0526 21:25:59.551180  527485 command_runner.go:124] >         "type": "NetworkReady",
	I0526 21:25:59.551190  527485 command_runner.go:124] >         "status": false,
	I0526 21:25:59.551201  527485 command_runner.go:124] >         "reason": "NetworkPluginNotReady",
	I0526 21:25:59.551212  527485 command_runner.go:124] >         "message": "Network plugin returns error: cni plugin not initialized"
	I0526 21:25:59.551226  527485 command_runner.go:124] >       }
	I0526 21:25:59.551231  527485 command_runner.go:124] >     ]
	I0526 21:25:59.551235  527485 command_runner.go:124] >   },
	I0526 21:25:59.551240  527485 command_runner.go:124] >   "cniconfig": {
	I0526 21:25:59.551245  527485 command_runner.go:124] >     "PluginDirs": [
	I0526 21:25:59.551250  527485 command_runner.go:124] >       "/opt/cni/bin"
	I0526 21:25:59.551254  527485 command_runner.go:124] >     ],
	I0526 21:25:59.551260  527485 command_runner.go:124] >     "PluginConfDir": "/etc/cni/net.mk",
	I0526 21:25:59.551269  527485 command_runner.go:124] >     "PluginMaxConfNum": 1,
	I0526 21:25:59.551274  527485 command_runner.go:124] >     "Prefix": "eth",
	I0526 21:25:59.551282  527485 command_runner.go:124] >     "Networks": [
	I0526 21:25:59.551288  527485 command_runner.go:124] >       {
	I0526 21:25:59.551294  527485 command_runner.go:124] >         "Config": {
	I0526 21:25:59.551308  527485 command_runner.go:124] >           "Name": "cni-loopback",
	I0526 21:25:59.551318  527485 command_runner.go:124] >           "CNIVersion": "0.3.1",
	I0526 21:25:59.551325  527485 command_runner.go:124] >           "Plugins": [
	I0526 21:25:59.551331  527485 command_runner.go:124] >             {
	I0526 21:25:59.551337  527485 command_runner.go:124] >               "Network": {
	I0526 21:25:59.551349  527485 command_runner.go:124] >                 "type": "loopback",
	I0526 21:25:59.551358  527485 command_runner.go:124] >                 "ipam": {},
	I0526 21:25:59.551366  527485 command_runner.go:124] >                 "dns": {}
	I0526 21:25:59.551373  527485 command_runner.go:124] >               },
	I0526 21:25:59.551381  527485 command_runner.go:124] >               "Source": "{\"type\":\"loopback\"}"
	I0526 21:25:59.551390  527485 command_runner.go:124] >             }
	I0526 21:25:59.551396  527485 command_runner.go:124] >           ],
	I0526 21:25:59.551411  527485 command_runner.go:124] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0526 21:25:59.551419  527485 command_runner.go:124] >         },
	I0526 21:25:59.551426  527485 command_runner.go:124] >         "IFName": "lo"
	I0526 21:25:59.551434  527485 command_runner.go:124] >       }
	I0526 21:25:59.551439  527485 command_runner.go:124] >     ]
	I0526 21:25:59.551446  527485 command_runner.go:124] >   },
	I0526 21:25:59.551452  527485 command_runner.go:124] >   "config": {
	I0526 21:25:59.551465  527485 command_runner.go:124] >     "containerd": {
	I0526 21:25:59.551473  527485 command_runner.go:124] >       "snapshotter": "overlayfs",
	I0526 21:25:59.551481  527485 command_runner.go:124] >       "defaultRuntimeName": "default",
	I0526 21:25:59.551491  527485 command_runner.go:124] >       "defaultRuntime": {
	I0526 21:25:59.551499  527485 command_runner.go:124] >         "runtimeType": "io.containerd.runc.v2",
	I0526 21:25:59.551509  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:25:59.551517  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:25:59.551527  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:25:59.551535  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:25:59.551544  527485 command_runner.go:124] >         "options": {},
	I0526 21:25:59.551553  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:25:59.551563  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:25:59.551569  527485 command_runner.go:124] >       },
	I0526 21:25:59.551578  527485 command_runner.go:124] >       "untrustedWorkloadRuntime": {
	I0526 21:25:59.551585  527485 command_runner.go:124] >         "runtimeType": "",
	I0526 21:25:59.551593  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:25:59.551600  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:25:59.551611  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:25:59.551619  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:25:59.551627  527485 command_runner.go:124] >         "options": null,
	I0526 21:25:59.551636  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:25:59.551646  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:25:59.551652  527485 command_runner.go:124] >       },
	I0526 21:25:59.551660  527485 command_runner.go:124] >       "runtimes": {
	I0526 21:25:59.551666  527485 command_runner.go:124] >         "default": {
	I0526 21:25:59.551675  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:25:59.551685  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:25:59.551692  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:25:59.551705  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:25:59.551713  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:25:59.551720  527485 command_runner.go:124] >           "options": {},
	I0526 21:25:59.551731  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:25:59.551740  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:25:59.551748  527485 command_runner.go:124] >         },
	I0526 21:25:59.551754  527485 command_runner.go:124] >         "runc": {
	I0526 21:25:59.551764  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:25:59.551771  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:25:59.551782  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:25:59.551791  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:25:59.551800  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:25:59.551807  527485 command_runner.go:124] >           "options": {},
	I0526 21:25:59.551818  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:25:59.551826  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:25:59.551834  527485 command_runner.go:124] >         }
	I0526 21:25:59.551841  527485 command_runner.go:124] >       },
	I0526 21:25:59.551848  527485 command_runner.go:124] >       "noPivot": false,
	I0526 21:25:59.551856  527485 command_runner.go:124] >       "disableSnapshotAnnotations": true,
	I0526 21:25:59.551869  527485 command_runner.go:124] >       "discardUnpackedLayers": false
	I0526 21:25:59.551875  527485 command_runner.go:124] >     },
	I0526 21:25:59.551881  527485 command_runner.go:124] >     "cni": {
	I0526 21:25:59.551888  527485 command_runner.go:124] >       "binDir": "/opt/cni/bin",
	I0526 21:25:59.551896  527485 command_runner.go:124] >       "confDir": "/etc/cni/net.mk",
	I0526 21:25:59.551902  527485 command_runner.go:124] >       "maxConfNum": 1,
	I0526 21:25:59.551910  527485 command_runner.go:124] >       "confTemplate": ""
	I0526 21:25:59.551915  527485 command_runner.go:124] >     },
	I0526 21:25:59.551922  527485 command_runner.go:124] >     "registry": {
	I0526 21:25:59.551928  527485 command_runner.go:124] >       "mirrors": {
	I0526 21:25:59.551935  527485 command_runner.go:124] >         "docker.io": {
	I0526 21:25:59.551941  527485 command_runner.go:124] >           "endpoint": [
	I0526 21:25:59.551952  527485 command_runner.go:124] >             "https://registry-1.docker.io"
	I0526 21:25:59.551958  527485 command_runner.go:124] >           ]
	I0526 21:25:59.551965  527485 command_runner.go:124] >         }
	I0526 21:25:59.551970  527485 command_runner.go:124] >       },
	I0526 21:25:59.551976  527485 command_runner.go:124] >       "configs": null,
	I0526 21:25:59.551983  527485 command_runner.go:124] >       "auths": null,
	I0526 21:25:59.551989  527485 command_runner.go:124] >       "headers": null
	I0526 21:25:59.551996  527485 command_runner.go:124] >     },
	I0526 21:25:59.552002  527485 command_runner.go:124] >     "imageDecryption": {
	I0526 21:25:59.552012  527485 command_runner.go:124] >       "keyModel": ""
	I0526 21:25:59.552018  527485 command_runner.go:124] >     },
	I0526 21:25:59.552026  527485 command_runner.go:124] >     "disableTCPService": true,
	I0526 21:25:59.552033  527485 command_runner.go:124] >     "streamServerAddress": "",
	I0526 21:25:59.552042  527485 command_runner.go:124] >     "streamServerPort": "10010",
	I0526 21:25:59.552051  527485 command_runner.go:124] >     "streamIdleTimeout": "4h0m0s",
	I0526 21:25:59.552058  527485 command_runner.go:124] >     "enableSelinux": false,
	I0526 21:25:59.552068  527485 command_runner.go:124] >     "selinuxCategoryRange": 1024,
	I0526 21:25:59.552076  527485 command_runner.go:124] >     "sandboxImage": "k8s.gcr.io/pause:3.2",
	I0526 21:25:59.552085  527485 command_runner.go:124] >     "statsCollectPeriod": 10,
	I0526 21:25:59.552092  527485 command_runner.go:124] >     "systemdCgroup": false,
	I0526 21:25:59.552100  527485 command_runner.go:124] >     "enableTLSStreaming": false,
	I0526 21:25:59.552106  527485 command_runner.go:124] >     "x509KeyPairStreaming": {
	I0526 21:25:59.552114  527485 command_runner.go:124] >       "tlsCertFile": "",
	I0526 21:25:59.552120  527485 command_runner.go:124] >       "tlsKeyFile": ""
	I0526 21:25:59.552126  527485 command_runner.go:124] >     },
	I0526 21:25:59.552133  527485 command_runner.go:124] >     "maxContainerLogSize": 16384,
	I0526 21:25:59.552143  527485 command_runner.go:124] >     "disableCgroup": false,
	I0526 21:25:59.552151  527485 command_runner.go:124] >     "disableApparmor": false,
	I0526 21:25:59.552158  527485 command_runner.go:124] >     "restrictOOMScoreAdj": false,
	I0526 21:25:59.552167  527485 command_runner.go:124] >     "maxConcurrentDownloads": 3,
	I0526 21:25:59.552174  527485 command_runner.go:124] >     "disableProcMount": false,
	I0526 21:25:59.552182  527485 command_runner.go:124] >     "unsetSeccompProfile": "",
	I0526 21:25:59.552189  527485 command_runner.go:124] >     "tolerateMissingHugetlbController": true,
	I0526 21:25:59.552199  527485 command_runner.go:124] >     "disableHugetlbController": true,
	I0526 21:25:59.552207  527485 command_runner.go:124] >     "ignoreImageDefinedVolumes": false,
	I0526 21:25:59.552218  527485 command_runner.go:124] >     "containerdRootDir": "/mnt/vda1/var/lib/containerd",
	I0526 21:25:59.552229  527485 command_runner.go:124] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0526 21:25:59.552240  527485 command_runner.go:124] >     "rootDir": "/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0526 21:25:59.552252  527485 command_runner.go:124] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0526 21:25:59.552258  527485 command_runner.go:124] >   },
	I0526 21:25:59.552264  527485 command_runner.go:124] >   "golang": "go1.13.15",
	I0526 21:25:59.552324  527485 command_runner.go:124] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:59.552334  527485 command_runner.go:124] > }
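The `crictl info` dump above shows RuntimeReady=true but NetworkReady=false ("cni plugin not initialized"), which is expected at this point because no CNI config has been written to /etc/cni/net.mk yet. A sketch that reads the same JSON and reports those conditions (field names follow the output shown in the log; not minikube's code):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		log.Fatalf("crictl info: %v", err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%s=%v %s %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}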
	I0526 21:25:59.552961  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:25:59.552978  527485 cni.go:154] 2 nodes found, recommending kindnet
	I0526 21:25:59.552991  527485 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0526 21:25:59.553008  527485 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210526212238-510955 NodeName:multinode-20210526212238-510955-m02 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.87 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0526 21:25:59.553130  527485 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-20210526212238-510955-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
	I0526 21:25:59.553214  527485 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-20210526212238-510955-m02 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.87 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0526 21:25:59.553264  527485 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0526 21:25:59.560597  527485 command_runner.go:124] > kubeadm
	I0526 21:25:59.560615  527485 command_runner.go:124] > kubectl
	I0526 21:25:59.560620  527485 command_runner.go:124] > kubelet
	I0526 21:25:59.560934  527485 binaries.go:44] Found k8s binaries, skipping transfer
	I0526 21:25:59.560975  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0526 21:25:59.567380  527485 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0526 21:25:59.578931  527485 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
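The two "scp memory" lines above install the kubelet systemd unit and the 10-kubeadm.conf drop-in rendered from the [Unit]/[Service] text logged earlier, before systemd is reloaded. A sketch of that install step with the flag list trimmed for brevity (flags shown are copied from the ExecStart in the log; this is an illustration, not minikube's code):

package main

import (
	"log"
	"os"
	"os/exec"
)

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		log.Fatal(err)
	}
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		log.Fatalf("daemon-reload: %v: %s", err, out)
	}
}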
	I0526 21:25:59.590340  527485 ssh_runner.go:149] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0526 21:25:59.594107  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:25:59.604129  527485 host.go:66] Checking if "multinode-20210526212238-510955" exists ...
	I0526 21:25:59.604413  527485 cache.go:108] acquiring lock: {Name:mk0fbd6526c48f14b253d250dd93663316e68dc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:25:59.604550  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:59.604588  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:59.604557  527485 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 exists
	I0526 21:25:59.604679  527485 cache.go:97] cache image "minikube-local-cache-test:functional-20210526211257-510955" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955" took 272.85µs
	I0526 21:25:59.604703  527485 cache.go:81] save to tar file minikube-local-cache-test:functional-20210526211257-510955 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 succeeded
	I0526 21:25:59.604717  527485 cache.go:88] Successfully saved all images to host disk.
	I0526 21:25:59.605163  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:59.605206  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:59.616711  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:42125
	I0526 21:25:59.617147  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:59.617659  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:59.617683  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:59.618111  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:59.618296  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:25:59.618446  527485 start.go:224] JoinCluster: &{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526
212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0526 21:25:59.618550  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm token create --print-join-command --ttl=0"
	I0526 21:25:59.618568  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:25:59.620352  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38883
	I0526 21:25:59.620740  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:59.621147  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:59.621170  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:59.621478  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:59.621674  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:25:59.624725  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.625091  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:25:59.625127  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.625241  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:25:59.625423  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:25:59.625584  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:25:59.625735  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:25:59.625912  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:59.625956  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:59.636036  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0526 21:25:59.636433  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:59.636888  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:59.636914  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:59.637199  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:59.637348  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:25:59.637531  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:25:59.637555  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:25:59.642278  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.642595  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:25:59.642624  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.642765  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:25:59.642942  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:25:59.643094  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:25:59.643231  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:25:59.864815  527485 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token ch1ot4.9etgzhm4zh9wn897     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace 
	I0526 21:25:59.866688  527485 command_runner.go:124] > {
	I0526 21:25:59.866706  527485 command_runner.go:124] >   "images": [
	I0526 21:25:59.866710  527485 command_runner.go:124] >     {
	I0526 21:25:59.866722  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:25:59.866731  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.866741  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:25:59.866766  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866773  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.866789  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:25:59.866800  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866807  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:25:59.866813  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.866818  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.866825  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.866831  527485 command_runner.go:124] >     },
	I0526 21:25:59.866837  527485 command_runner.go:124] >     {
	I0526 21:25:59.866852  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:25:59.866861  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.866870  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:25:59.866879  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866885  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.866899  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:25:59.866907  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866914  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:25:59.866922  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.866932  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:25:59.866921  527485 start.go:245] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
	I0526 21:25:59.866961  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm join control-plane.minikube.internal:8443 --token ch1ot4.9etgzhm4zh9wn897     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-20210526212238-510955-m02"
	I0526 21:25:59.866939  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867016  527485 command_runner.go:124] >     },
	I0526 21:25:59.867021  527485 command_runner.go:124] >     {
	I0526 21:25:59.867029  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:25:59.867033  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867041  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:25:59.867047  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867060  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867074  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:25:59.867090  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867097  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:25:59.867106  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867112  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867123  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867128  527485 command_runner.go:124] >     },
	I0526 21:25:59.867134  527485 command_runner.go:124] >     {
	I0526 21:25:59.867145  527485 command_runner.go:124] >       "id": "sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830",
	I0526 21:25:59.867155  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867166  527485 command_runner.go:124] >         "docker.io/library/minikube-local-cache-test:functional-20210526211257-510955"
	I0526 21:25:59.867174  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867181  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867189  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867196  527485 command_runner.go:124] >       "size": "1737",
	I0526 21:25:59.867205  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867211  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867218  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867224  527485 command_runner.go:124] >     },
	I0526 21:25:59.867228  527485 command_runner.go:124] >     {
	I0526 21:25:59.867236  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:25:59.867243  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867251  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:25:59.867260  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867266  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867279  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:25:59.867295  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867303  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:25:59.867312  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867318  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867327  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867340  527485 command_runner.go:124] >     },
	I0526 21:25:59.867350  527485 command_runner.go:124] >     {
	I0526 21:25:59.867367  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:25:59.867377  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867384  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:25:59.867393  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867399  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867414  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:25:59.867421  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867426  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:25:59.867434  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867440  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867448  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867454  527485 command_runner.go:124] >     },
	I0526 21:25:59.867462  527485 command_runner.go:124] >     {
	I0526 21:25:59.867473  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:25:59.867482  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867489  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:25:59.867498  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867504  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867518  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:25:59.867522  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867531  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:25:59.867537  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867544  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867552  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867556  527485 command_runner.go:124] >     },
	I0526 21:25:59.867561  527485 command_runner.go:124] >     {
	I0526 21:25:59.867573  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:25:59.867580  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867587  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:25:59.867596  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867602  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867614  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:25:59.867622  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867629  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:25:59.867635  527485 command_runner.go:124] >       "uid": {
	I0526 21:25:59.867641  527485 command_runner.go:124] >         "value": "0"
	I0526 21:25:59.867649  527485 command_runner.go:124] >       },
	I0526 21:25:59.867656  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867664  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867669  527485 command_runner.go:124] >     },
	I0526 21:25:59.867674  527485 command_runner.go:124] >     {
	I0526 21:25:59.867685  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:25:59.867691  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867700  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:25:59.867706  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867712  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867725  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:25:59.867730  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867736  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:25:59.867740  527485 command_runner.go:124] >       "uid": {
	I0526 21:25:59.867746  527485 command_runner.go:124] >         "value": "0"
	I0526 21:25:59.867749  527485 command_runner.go:124] >       },
	I0526 21:25:59.867753  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867757  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867760  527485 command_runner.go:124] >     },
	I0526 21:25:59.867764  527485 command_runner.go:124] >     {
	I0526 21:25:59.867770  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:25:59.867775  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867779  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:25:59.867782  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867786  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867793  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:25:59.867796  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867800  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:25:59.867804  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867808  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867812  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867815  527485 command_runner.go:124] >     },
	I0526 21:25:59.867818  527485 command_runner.go:124] >     {
	I0526 21:25:59.867828  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:25:59.867833  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867842  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:25:59.867845  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867849  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867857  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:25:59.867860  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867864  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:25:59.867868  527485 command_runner.go:124] >       "uid": {
	I0526 21:25:59.867874  527485 command_runner.go:124] >         "value": "0"
	I0526 21:25:59.867877  527485 command_runner.go:124] >       },
	I0526 21:25:59.867881  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867885  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867888  527485 command_runner.go:124] >     },
	I0526 21:25:59.867892  527485 command_runner.go:124] >     {
	I0526 21:25:59.867898  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:25:59.867902  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867910  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:25:59.867913  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867917  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867926  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:25:59.867929  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867934  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:25:59.867937  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867941  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867945  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867949  527485 command_runner.go:124] >     }
	I0526 21:25:59.867952  527485 command_runner.go:124] >   ]
	I0526 21:25:59.867955  527485 command_runner.go:124] > }
	I0526 21:25:59.868129  527485 containerd.go:566] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210526211257-510955". assuming images are not preloaded.
	I0526 21:25:59.868140  527485 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210526211257-510955]
	I0526 21:25:59.868189  527485 image.go:162] retrieving image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:25:59.868201  527485 image.go:168] checking repository: index.docker.io/library/minikube-local-cache-test
	W0526 21:25:59.927015  527485 image.go:175] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0526 21:25:59.927045  527485 image.go:176] short name: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:25:59.928079  527485 image.go:204] daemon lookup for minikube-local-cache-test:functional-20210526211257-510955: Error response from daemon: reference does not exist
	W0526 21:25:59.966822  527485 image.go:214] authn lookup for minikube-local-cache-test:functional-20210526211257-510955 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:25:59.994816  527485 command_runner.go:124] > [preflight] Running pre-flight checks
	I0526 21:26:00.009941  527485 image.go:218] remote lookup for minikube-local-cache-test:functional-20210526211257-510955: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:26:00.009973  527485 image.go:95] error retrieve Image minikube-local-cache-test:functional-20210526211257-510955 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0526 21:26:00.009996  527485 cache_images.go:106] "minikube-local-cache-test:functional-20210526211257-510955" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:00.010026  527485 cri.go:205] Removing image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:00.010094  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:26:00.310955  527485 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0526 21:26:00.310992  527485 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0526 21:26:00.345407  527485 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0526 21:26:00.345844  527485 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0526 21:26:00.345868  527485 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0526 21:26:00.486355  527485 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0526 21:26:07.017586  527485 command_runner.go:124] > This node has joined the cluster:
	I0526 21:26:07.017615  527485 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0526 21:26:07.017622  527485 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0526 21:26:07.017629  527485 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0526 21:26:07.019048  527485 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0526 21:26:07.019107  527485 command_runner.go:124] > /bin/crictl
	I0526 21:26:07.019143  527485 ssh_runner.go:189] Completed: which crictl: (7.009033477s)
	I0526 21:26:07.019205  527485 ssh_runner.go:149] Run: sudo /bin/crictl rmi minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.019308  527485 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm join control-plane.minikube.internal:8443 --token ch1ot4.9etgzhm4zh9wn897     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-20210526212238-510955-m02": (7.152324551s)
	I0526 21:26:07.019338  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0526 21:26:07.298667  527485 command_runner.go:124] > Deleted: docker.io/library/minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.298756  527485 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.298800  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.298804  527485 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0526 21:26:07.298834  527485 start.go:226] JoinCluster complete in 7.68039068s
	I0526 21:26:07.298848  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:26:07.298854  527485 cni.go:154] 2 nodes found, recommending kindnet
	I0526 21:26:07.298881  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.298894  527485 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0526 21:26:07.305231  527485 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0526 21:26:07.305252  527485 command_runner.go:124] >   Size: 2849304   	Blocks: 5568       IO Block: 4096   regular file
	I0526 21:26:07.305259  527485 command_runner.go:124] > Device: 10h/16d	Inode: 23213       Links: 1
	I0526 21:26:07.305266  527485 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:26:07.305272  527485 command_runner.go:124] > Access: 2021-05-26 21:22:53.150354389 +0000
	I0526 21:26:07.305278  527485 command_runner.go:124] > Modify: 2021-05-05 21:33:55.000000000 +0000
	I0526 21:26:07.305283  527485 command_runner.go:124] > Change: 2021-05-26 21:22:48.920437741 +0000
	I0526 21:26:07.305286  527485 command_runner.go:124] >  Birth: -
	I0526 21:26:07.305559  527485 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	I0526 21:26:07.305579  527485 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0526 21:26:07.308129  527485 command_runner.go:124] > 5120 2021-05-26 21:15:56.088554954 +0000
	I0526 21:26:07.308647  527485 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955 (exists)
	I0526 21:26:07.308661  527485 containerd.go:260] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.308705  527485 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.321322  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0526 21:26:07.561843  527485 command_runner.go:124] > unpacking docker.io/library/minikube-local-cache-test:functional-20210526211257-510955 (sha256:d8b8bd0a35bb7de49f0a81841d103dd430b2bd6e4ca4d65facee12d3e0605733)...done
	I0526 21:26:07.563879  527485 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 from cache
	I0526 21:26:07.563918  527485 cache_images.go:113] Successfully loaded all cached images
	I0526 21:26:07.563926  527485 cache_images.go:82] LoadImages completed in 7.695778796s
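
The block above is the image-cache flow minikube runs against the freshly joined worker: list what containerd already has with "sudo crictl images --output json", notice the locally built test tag is missing, remove any stale copy with "crictl rmi", copy the cached tarball over SSH, and import it with "sudo ctr -n=k8s.io images import". A minimal, standalone sketch of the first step only — deciding from the crictl JSON (the same images[].repoTags shape printed above) whether a tag is already present. This is not minikube's own code; it assumes crictl is installed locally and uses this test's tag purely as an example:

// Sketch (not minikube's code): check whether a tag is already present by
// parsing `crictl images --output json`, the listing shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	// Same command the log records via ssh_runner.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Tag taken from this test run; any other tag works the same way.
	found, err := hasImage("docker.io/minikube-local-cache-test:functional-20210526211257-510955")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", found)
}
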
	I0526 21:26:07.564252  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:26:07.564291  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:26:07.575574  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0526 21:26:07.576036  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:26:07.576536  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:26:07.576562  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:26:07.576967  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:26:07.577142  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetState
	I0526 21:26:07.580730  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:26:07.580782  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:26:07.592963  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0526 21:26:07.593471  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:26:07.594036  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:26:07.594068  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:26:07.594465  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:26:07.594646  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:26:07.594895  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:26:07.594929  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:26:07.601623  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:26:07.602019  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:26:07.602056  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:26:07.602144  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:26:07.602316  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:26:07.602462  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:26:07.602655  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:26:07.742042  527485 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0526 21:26:07.742085  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0526 21:26:07.742095  527485 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0526 21:26:07.742102  527485 command_runner.go:124] > daemonset.apps/kindnet configured
	I0526 21:26:07.742151  527485 start.go:209] Will wait 6m0s for node &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
	I0526 21:26:07.742174  527485 command_runner.go:124] > {
	I0526 21:26:07.742192  527485 command_runner.go:124] >   "images": [
	I0526 21:26:07.742199  527485 command_runner.go:124] >     {
	I0526 21:26:07.744012  527485 out.go:170] * Verifying Kubernetes components...
	I0526 21:26:07.742212  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:26:07.744108  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744122  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:26:07.744127  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744131  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744140  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:26:07.744157  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744168  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:26:07.744090  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:26:07.744175  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744260  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744268  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744275  527485 command_runner.go:124] >     },
	I0526 21:26:07.744280  527485 command_runner.go:124] >     {
	I0526 21:26:07.744299  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:26:07.744309  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744318  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:26:07.744326  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744333  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744347  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:26:07.744355  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744362  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:26:07.744368  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744379  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:26:07.744388  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744393  527485 command_runner.go:124] >     },
	I0526 21:26:07.744398  527485 command_runner.go:124] >     {
	I0526 21:26:07.744409  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:26:07.744418  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744426  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:26:07.744433  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744447  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744462  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:26:07.744469  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744475  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:26:07.744481  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744487  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744494  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744499  527485 command_runner.go:124] >     },
	I0526 21:26:07.744506  527485 command_runner.go:124] >     {
	I0526 21:26:07.744516  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:26:07.744525  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744533  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:26:07.744539  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744545  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744559  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:26:07.744567  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744573  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:26:07.744581  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744587  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744594  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744599  527485 command_runner.go:124] >     },
	I0526 21:26:07.744605  527485 command_runner.go:124] >     {
	I0526 21:26:07.744615  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:26:07.744625  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744632  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:26:07.744640  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744646  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744659  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:26:07.744667  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744673  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:26:07.744680  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744689  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744698  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744703  527485 command_runner.go:124] >     },
	I0526 21:26:07.744716  527485 command_runner.go:124] >     {
	I0526 21:26:07.744727  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:26:07.744734  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744741  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:26:07.744746  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744756  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744767  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:26:07.744772  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744779  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:26:07.744786  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744793  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744800  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744805  527485 command_runner.go:124] >     },
	I0526 21:26:07.744811  527485 command_runner.go:124] >     {
	I0526 21:26:07.744827  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:26:07.744838  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744846  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:26:07.744851  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744857  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744886  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:26:07.744894  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744901  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:26:07.744908  527485 command_runner.go:124] >       "uid": {
	I0526 21:26:07.744914  527485 command_runner.go:124] >         "value": "0"
	I0526 21:26:07.744920  527485 command_runner.go:124] >       },
	I0526 21:26:07.744926  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744934  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744939  527485 command_runner.go:124] >     },
	I0526 21:26:07.744946  527485 command_runner.go:124] >     {
	I0526 21:26:07.744959  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:26:07.744965  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744976  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:26:07.744981  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744988  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744999  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:26:07.745006  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745013  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:26:07.745019  527485 command_runner.go:124] >       "uid": {
	I0526 21:26:07.745024  527485 command_runner.go:124] >         "value": "0"
	I0526 21:26:07.745031  527485 command_runner.go:124] >       },
	I0526 21:26:07.745037  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745045  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745050  527485 command_runner.go:124] >     },
	I0526 21:26:07.745057  527485 command_runner.go:124] >     {
	I0526 21:26:07.745067  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:26:07.745076  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.745083  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:26:07.745091  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745098  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.745111  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:26:07.745118  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745124  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:26:07.745132  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.745137  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745144  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745149  527485 command_runner.go:124] >     },
	I0526 21:26:07.745157  527485 command_runner.go:124] >     {
	I0526 21:26:07.745167  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:26:07.745177  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.745185  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:26:07.745193  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745199  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.745210  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:26:07.745218  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745224  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:26:07.745232  527485 command_runner.go:124] >       "uid": {
	I0526 21:26:07.745238  527485 command_runner.go:124] >         "value": "0"
	I0526 21:26:07.745245  527485 command_runner.go:124] >       },
	I0526 21:26:07.745256  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745265  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745270  527485 command_runner.go:124] >     },
	I0526 21:26:07.745277  527485 command_runner.go:124] >     {
	I0526 21:26:07.745294  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:26:07.745303  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.745309  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:26:07.745319  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745326  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.745338  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:26:07.745344  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745350  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:26:07.745356  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.745364  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745370  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745374  527485 command_runner.go:124] >     }
	I0526 21:26:07.745379  527485 command_runner.go:124] >   ]
	I0526 21:26:07.745383  527485 command_runner.go:124] > }
	I0526 21:26:07.745562  527485 containerd.go:566] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210526211257-510955". assuming images are not preloaded.
	I0526 21:26:07.745581  527485 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210526211257-510955]
	I0526 21:26:07.745632  527485 image.go:162] retrieving image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.745650  527485 image.go:168] checking repository: index.docker.io/library/minikube-local-cache-test
	I0526 21:26:07.770156  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:26:07.770880  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:26:07.773470  527485 node_ready.go:35] waiting up to 6m0s for node "multinode-20210526212238-510955-m02" to be "Ready" ...
	I0526 21:26:07.773560  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:07.773573  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:07.773580  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:07.773589  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:07.780507  527485 round_trippers.go:448] Response Status: 200 OK in 6 milliseconds
	I0526 21:26:07.780522  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:07.780527  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:07.780532  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:07.780536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:07.780540  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:07.780544  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:07 GMT
	I0526 21:26:07.781445  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	W0526 21:26:07.798802  527485 image.go:175] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0526 21:26:07.798833  527485 image.go:176] short name: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.802331  527485 image.go:204] daemon lookup for minikube-local-cache-test:functional-20210526211257-510955: Error response from daemon: reference does not exist
	W0526 21:26:07.846923  527485 image.go:214] authn lookup for minikube-local-cache-test:functional-20210526211257-510955 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:26:07.894841  527485 image.go:218] remote lookup for minikube-local-cache-test:functional-20210526211257-510955: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:26:07.894894  527485 image.go:95] error retrieve Image minikube-local-cache-test:functional-20210526211257-510955 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0526 21:26:07.894924  527485 cache_images.go:106] "minikube-local-cache-test:functional-20210526211257-510955" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.894963  527485 cri.go:205] Removing image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.895010  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:26:07.900977  527485 command_runner.go:124] > /bin/crictl
	I0526 21:26:07.901235  527485 ssh_runner.go:149] Run: sudo /bin/crictl rmi minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.925276  527485 command_runner.go:124] ! time="2021-05-26T21:26:07Z" level=error msg="no such image minikube-local-cache-test:functional-20210526211257-510955"
	I0526 21:26:07.925308  527485 command_runner.go:124] ! time="2021-05-26T21:26:07Z" level=fatal msg="unable to remove the image(s)"
	I0526 21:26:07.925350  527485 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.925385  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.925452  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.938254  527485 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:26:07.938651  527485 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:26:07.938689  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955 (5120 bytes)
	I0526 21:26:07.958227  527485 containerd.go:260] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.958318  527485 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:08.222264  527485 command_runner.go:124] > unpacking docker.io/library/minikube-local-cache-test:functional-20210526211257-510955 (sha256:d8b8bd0a35bb7de49f0a81841d103dd430b2bd6e4ca4d65facee12d3e0605733)...done
	I0526 21:26:08.225698  527485 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 from cache
	I0526 21:26:08.225738  527485 cache_images.go:113] Successfully loaded all cached images
	I0526 21:26:08.225749  527485 cache_images.go:82] LoadImages completed in 480.158269ms
	I0526 21:26:08.225768  527485 cache_images.go:252] succeeded pushing to: multinode-20210526212238-510955 multinode-20210526212238-510955-m02
	I0526 21:26:08.225775  527485 cache_images.go:253] failed pushing to: 
	I0526 21:26:08.225807  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.225824  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:26:08.226096  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.226140  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:26:08.226143  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.226208  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.226218  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:26:08.226430  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.226473  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.226488  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.226499  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .Close
	I0526 21:26:08.226457  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:26:08.227680  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Closing plugin on server side
	I0526 21:26:08.227720  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.227733  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.227744  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.227756  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .Close
	I0526 21:26:08.227952  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.227964  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.282181  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:08.282200  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:08.282205  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:08.282213  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:08.284910  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:08.284928  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:08.284935  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:08.284941  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:08.284945  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:08 GMT
	I0526 21:26:08.284949  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:08.284954  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:08.285156  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:08.782297  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:08.782321  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:08.782327  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:08.782330  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:08.785128  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:08.785141  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:08.785145  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:08.785148  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:08.785153  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:08.785156  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:08 GMT
	I0526 21:26:08.785163  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:08.785383  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:09.282567  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:09.282601  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:09.282609  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:09.282616  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:09.285756  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:09.285781  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:09.285787  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:09.285793  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:09.285798  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:09.285803  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:09.285807  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:09 GMT
	I0526 21:26:09.286343  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:09.782429  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:09.782459  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:09.782467  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:09.782471  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:09.785669  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:09.785690  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:09.785694  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:09.785697  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:09.785700  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:09.785703  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:09.785706  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:09 GMT
	I0526 21:26:09.785974  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:09.786210  527485 node_ready.go:58] node "multinode-20210526212238-510955-m02" has status "Ready":"False"
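
The repeated GET requests against /api/v1/nodes/multinode-20210526212238-510955-m02 are node_ready.go polling the new worker until its Ready condition turns True, within the 6m0s budget noted above. A rough client-go equivalent of that wait loop, offered only as a sketch — the kubeconfig path and node name below are placeholders, not taken from minikube's source:

// Sketch: poll a node every 500ms until its Ready condition is True,
// mirroring the GET /api/v1/nodes/<name> loop recorded in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the test
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"multinode-20210526212238-510955-m02", metav1.GetOptions{}) // placeholder node name
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
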
	I0526 21:26:10.282294  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:10.282323  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:10.282328  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:10.282332  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:10.287758  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:26:10.287777  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:10.287783  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:10.287788  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:10 GMT
	I0526 21:26:10.287792  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:10.287796  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:10.287800  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:10.288064  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:10.781956  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:10.781982  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:10.781987  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:10.781992  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:10.785443  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:10.785462  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:10.785467  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:10.785473  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:10 GMT
	I0526 21:26:10.785477  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:10.785481  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:10.785485  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:10.785778  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:11.282455  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:11.282481  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:11.282486  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:11.282490  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:11.285437  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:11.285458  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:11.285465  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:11.285470  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:11.285472  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:11.285475  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:11.285478  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:11 GMT
	I0526 21:26:11.286729  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:11.782509  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:11.782536  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:11.782541  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:11.782545  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:11.785153  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:11.785171  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:11.785175  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:11.785179  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:11.785181  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:11.785184  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:11.785187  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:11 GMT
	I0526 21:26:11.785305  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:12.282135  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:12.282160  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:12.282166  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:12.282170  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:12.284771  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:12.284787  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:12.284798  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:12.284801  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:12.284806  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:12.284809  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:12.284882  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:12 GMT
	I0526 21:26:12.284996  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:12.285281  527485 node_ready.go:58] node "multinode-20210526212238-510955-m02" has status "Ready":"False"
	I0526 21:26:12.782057  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:12.782080  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:12.782085  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:12.782089  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:12.784478  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:12.784501  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:12.784507  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:12 GMT
	I0526 21:26:12.784514  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:12.784518  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:12.784527  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:12.784532  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:12.784737  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:13.282533  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:13.282565  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:13.282576  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:13.282582  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:13.286379  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:13.286395  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:13.286399  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:13.286403  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:13 GMT
	I0526 21:26:13.286406  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:13.286408  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:13.286411  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:13.286978  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:13.782029  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:13.782051  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:13.782057  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:13.782061  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:13.785869  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:13.785888  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:13.785893  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:13.785896  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:13.785899  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:13.785902  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:13.785905  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:13 GMT
	I0526 21:26:13.786356  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:14.282505  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:14.282530  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:14.282536  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:14.282540  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:14.285358  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:14.285374  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:14.285378  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:14.285381  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:14.285384  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:14.285387  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:14.285390  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:14 GMT
	I0526 21:26:14.285904  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:14.286147  527485 node_ready.go:58] node "multinode-20210526212238-510955-m02" has status "Ready":"False"
	I0526 21:26:14.782213  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:14.782251  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:14.782264  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:14.782275  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:14.785317  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:14.785338  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:14.785343  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:14 GMT
	I0526 21:26:14.785348  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:14.785352  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:14.785357  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:14.785360  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:14.785780  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:15.281994  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:15.282021  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:15.282026  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:15.282030  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:15.284857  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:15.284896  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:15.284901  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:15.284906  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:15.284910  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:15.284914  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:15.284918  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:15 GMT
	I0526 21:26:15.285547  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:15.782602  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:15.782662  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:15.782681  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:15.782697  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:15.785867  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:15.785881  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:15.785886  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:15.785891  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:15.785900  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:15.785905  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:15.785910  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:15 GMT
	I0526 21:26:15.786228  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:16.282045  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:16.282068  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.282074  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.282078  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.284876  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.284896  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.284901  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.284905  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.284910  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.284914  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.284919  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.285029  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:16.782674  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:16.782700  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.782705  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.782709  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.787282  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:26:16.787296  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.787302  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.787306  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.787311  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.787316  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.787320  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.787682  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"643","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5328 chars]
	I0526 21:26:16.788003  527485 node_ready.go:49] node "multinode-20210526212238-510955-m02" has status "Ready":"True"
	I0526 21:26:16.788033  527485 node_ready.go:38] duration metric: took 9.014539124s waiting for node "multinode-20210526212238-510955-m02" to be "Ready" ...
	I0526 21:26:16.788053  527485 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0526 21:26:16.788131  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:26:16.788143  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.788150  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.788155  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.791856  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.791870  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.791873  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.791876  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.791879  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.791882  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.791884  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.794499  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"643"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},
"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 66009 chars]
	I0526 21:26:16.796026  527485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.796089  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:26:16.796100  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.796106  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.796110  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.798364  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.798377  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.798381  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.798385  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.798387  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.798391  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.798396  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.798787  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5780 chars]
	I0526 21:26:16.799179  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.799196  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.799202  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.799208  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.801151  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:26:16.801169  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.801175  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.801182  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.801193  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.801198  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.801207  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.801323  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.801577  527485 pod_ready.go:92] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.801590  527485 pod_ready.go:81] duration metric: took 5.537684ms waiting for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.801598  527485 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.801646  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:26:16.801657  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.801663  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.801669  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.804138  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.804148  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.804155  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.804160  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.804166  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.804171  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.804175  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.804609  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"539","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5642 chars]
	I0526 21:26:16.804940  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.804955  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.804961  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.804967  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.807074  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.807112  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.807123  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.807127  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.807132  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.807137  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.807142  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.807917  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.808139  527485 pod_ready.go:92] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.808152  527485 pod_ready.go:81] duration metric: took 6.548202ms waiting for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.808170  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.808219  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955
	I0526 21:26:16.808228  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.808235  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.808242  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.810336  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.810352  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.810357  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.810361  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.810365  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.810370  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.810374  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.810791  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210526212238-510955","namespace":"kube-system","uid":"5d446255-3487-4319-9b9f-2294a93fd226","resourceVersion":"447","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.mirror":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.seen":"2021-05-26T21:23:43.638984722Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 7266 chars]
	I0526 21:26:16.811070  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.811083  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.811092  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.811098  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.813937  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.813950  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.813955  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.813959  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.813963  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.813968  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.813973  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.814281  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.814488  527485 pod_ready.go:92] pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.814499  527485 pod_ready.go:81] duration metric: took 6.318765ms waiting for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.814510  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.814550  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:26:16.814560  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.814566  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.814572  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.818018  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.818030  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.818034  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.818037  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.818040  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.818043  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.818047  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.818941  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"546","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6822 chars]
	I0526 21:26:16.819293  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.819317  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.819325  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.819333  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.822487  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.822498  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.822503  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.822507  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.822511  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.822516  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.822521  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.823589  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.823790  527485 pod_ready.go:92] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.823802  527485 pod_ready.go:81] duration metric: took 9.28412ms waiting for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.823812  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7l2f" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.983206  527485 request.go:591] Throttling request took 159.360803ms, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7l2f
	I0526 21:26:16.983247  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7l2f
	I0526 21:26:16.983252  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.983257  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.983262  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.986689  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.986703  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.986708  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.986712  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.986717  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.986721  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.986725  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.987176  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q7l2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8e75477a-14d2-46d9-8fa8-32dd3a2a4fc4","resourceVersion":"628","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"controller-revision-hash":"b89db7f56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"59f7a309-d89a-4050-8e82-fc8da888387f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59f7a309-d89a-4050-8e82-fc8da888387f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".":{ [truncated 5533 chars]
	I0526 21:26:17.182698  527485 request.go:591] Throttling request took 195.251879ms, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:17.182829  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:17.182846  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.182858  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.182868  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.185282  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.185306  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.185310  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.185314  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.185317  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.185321  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.185353  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.185464  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"643","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5328 chars]
	I0526 21:26:17.185655  527485 pod_ready.go:92] pod "kube-proxy-q7l2f" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:17.185665  527485 pod_ready.go:81] duration metric: took 361.847259ms waiting for pod "kube-proxy-q7l2f" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.185673  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.383075  527485 request.go:591] Throttling request took 197.367812ms, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qbl42
	I0526 21:26:17.383116  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qbl42
	I0526 21:26:17.383123  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.383127  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.383144  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.385686  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.385701  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.385706  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.385711  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.385715  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.385719  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.385724  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.386210  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qbl42","generateName":"kube-proxy-","namespace":"kube-system","uid":"950a915d-c5f0-4e6f-bc12-ee97013032f0","resourceVersion":"453","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"controller-revision-hash":"b89db7f56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"59f7a309-d89a-4050-8e82-fc8da888387f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59f7a309-d89a-4050-8e82-fc8da888387f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".":{ [truncated 5529 chars]
	I0526 21:26:17.582744  527485 request.go:591] Throttling request took 196.212706ms, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.582878  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.582918  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.582938  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.582957  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.586038  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:17.586053  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.586059  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.586064  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.586068  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.586072  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.586077  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.586421  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:17.586719  527485 pod_ready.go:92] pod "kube-proxy-qbl42" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:17.586735  527485 pod_ready.go:81] duration metric: took 401.054991ms waiting for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.586747  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.783036  527485 request.go:591] Throttling request took 196.229128ms, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:26:17.783077  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:26:17.783082  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.783086  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.783091  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.785449  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.785465  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.785474  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.785480  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.785485  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.785491  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.785496  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.785637  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"547","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4552 chars]
	I0526 21:26:17.983216  527485 request.go:591] Throttling request took 197.353257ms, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.983271  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.983278  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.983287  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.983295  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.986203  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.986220  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.986226  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.986231  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.986236  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.986241  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.986245  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.986391  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:17.986648  527485 pod_ready.go:92] pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:17.986659  527485 pod_ready.go:81] duration metric: took 399.904203ms waiting for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.986668  527485 pod_ready.go:38] duration metric: took 1.198598504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0526 21:26:17.986690  527485 system_svc.go:44] waiting for kubelet service to be running ....
	I0526 21:26:17.986746  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:26:17.997734  527485 system_svc.go:56] duration metric: took 11.038645ms WaitForService to wait for kubelet.
	I0526 21:26:17.997761  527485 kubeadm.go:547] duration metric: took 10.255571644s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0526 21:26:17.997798  527485 node_conditions.go:102] verifying NodePressure condition ...
	I0526 21:26:18.183268  527485 request.go:591] Throttling request took 185.408975ms, request: GET:https://192.168.39.229:8443/api/v1/nodes
	I0526 21:26:18.183312  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes
	I0526 21:26:18.183317  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:18.183324  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:18.183329  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:18.185874  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:18.185890  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:18.185894  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:18.185898  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:18.185901  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:18.185904  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:18.185908  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:18 GMT
	I0526 21:26:18.186054  527485 request.go:1107] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"645"},"items":[{"metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager
":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T2 [truncated 12475 chars]
	I0526 21:26:18.186429  527485 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0526 21:26:18.186450  527485 node_conditions.go:123] node cpu capacity is 2
	I0526 21:26:18.186463  527485 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0526 21:26:18.186475  527485 node_conditions.go:123] node cpu capacity is 2
	I0526 21:26:18.186482  527485 node_conditions.go:105] duration metric: took 188.674076ms to run NodePressure ...
	I0526 21:26:18.186496  527485 start.go:214] waiting for startup goroutines ...
	I0526 21:26:18.227617  527485 start.go:462] kubectl: 1.20.5, cluster: 1.20.2 (minor skew: 0)
	I0526 21:26:18.230027  527485 out.go:170] * Done! kubectl is now configured to use "multinode-20210526212238-510955" cluster and "default" namespace by default
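
The pod_ready.go and node_conditions.go entries above are minikube's post-start verification pass: each system pod is polled until its Ready condition reports True, the kubelet unit is probed over SSH with systemctl is-active, and node CPU and ephemeral-storage capacity are read back from /api/v1/nodes. The "Throttling request took ..." lines come from client-go's client-side rate limiter, not from the API server. A minimal sketch of this kind of readiness poll with client-go is shown below; it assumes a standard kubeconfig at ~/.kube/config and is an illustration only, not minikube's actual pod_ready.go (the pod name is copied from this run purely for readability).

    // poll_ready.go - illustrative readiness poll; not minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls the API server until the named pod reports Ready=True
    // or the timeout expires.
    func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitForPodReady(cs, "kube-system", "kube-scheduler-multinode-20210526212238-510955", 6*time.Minute)
        fmt.Println("wait result:", err)
    }

Raising QPS and Burst on the rest.Config would reduce the throttling messages seen above, at the cost of more concurrent API traffic.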
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	65b1b69ac45a2       8c811b4aec35f       2 minutes ago       Running             busybox                   0                   1072cb707c440
	a9593dff4428d       bfe3a36ebd252       4 minutes ago       Running             coredns                   0                   1d96eb581f035
	5d3df8c94eaed       6e38f40d628db       4 minutes ago       Running             storage-provisioner       0                   722b1b257c571
	69df1859ce4d1       6de166512aa22       4 minutes ago       Running             kindnet-cni               0                   53490c652b9e5
	de6efc6fec4b2       43154ddb57a83       4 minutes ago       Running             kube-proxy                0                   038c42970362d
	c8538106e966b       0369cf4303ffd       5 minutes ago       Running             etcd                      0                   2ad404c6a9c44
	e6bb9bee7539a       ed2c44fbdd78b       5 minutes ago       Running             kube-scheduler            0                   24fd8b8599a6e
	2314e41b1b443       a27166429d98e       5 minutes ago       Running             kube-controller-manager   0                   73ada73fbbf0b
	a0581c0e5409b       a8c2fdb8bf76e       5 minutes ago       Running             kube-apiserver            0                   fe43674906f20
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:28:50 UTC. --
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.309661198Z" level=info msg="Exec process \"f7f5df022fa6389fc48c539e9c176c4764ca2c7f56c65e77b4e5eef36dbe3de5\" exits with exit code 0 and error <nil>"
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.309692729Z" level=info msg="Finish piping \"stdout\" of container exec \"f7f5df022fa6389fc48c539e9c176c4764ca2c7f56c65e77b4e5eef36dbe3de5\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.309725890Z" level=info msg="Finish piping \"stderr\" of container exec \"f7f5df022fa6389fc48c539e9c176c4764ca2c7f56c65e77b4e5eef36dbe3de5\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.687053254Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" with command [nslookup kubernetes.default], tty false and stdin false"
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.687633785Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" returns URL \"http://192.168.122.92:10010/exec/YMpM3wxp\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.764350132Z" level=info msg="Finish piping \"stdout\" of container exec \"f9f4b085c72d5a55e4a968a84b91407c68bf11856e2cc297476dfb714eac4ee2\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.764583210Z" level=info msg="Finish piping \"stderr\" of container exec \"f9f4b085c72d5a55e4a968a84b91407c68bf11856e2cc297476dfb714eac4ee2\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.765539605Z" level=info msg="Exec process \"f9f4b085c72d5a55e4a968a84b91407c68bf11856e2cc297476dfb714eac4ee2\" exits with exit code 0 and error <nil>"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.146873242Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" with command [nslookup kubernetes.default.svc.cluster.local], tty false and stdin false"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.147013185Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" returns URL \"http://192.168.122.92:10010/exec/x8dPJXEC\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.231284522Z" level=info msg="Exec process \"b548f73424ce2298287dcd89720ef9f6b3bba8f0f2f9492315835164a109ba60\" exits with exit code 0 and error <nil>"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.232260472Z" level=info msg="Finish piping \"stdout\" of container exec \"b548f73424ce2298287dcd89720ef9f6b3bba8f0f2f9492315835164a109ba60\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.232712636Z" level=info msg="Finish piping \"stderr\" of container exec \"b548f73424ce2298287dcd89720ef9f6b3bba8f0f2f9492315835164a109ba60\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.731169077Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" with command [sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3], tty false and stdin false"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.731227728Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" returns URL \"http://192.168.122.92:10010/exec/p47VCeNE\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.818927899Z" level=info msg="Exec process \"2492d26147794af576f041ece1f96ff9b4387411e8a7ebe1aebab298c37ba305\" exits with exit code 0 and error <nil>"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.819225501Z" level=info msg="Finish piping \"stdout\" of container exec \"2492d26147794af576f041ece1f96ff9b4387411e8a7ebe1aebab298c37ba305\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.820587364Z" level=info msg="Finish piping \"stderr\" of container exec \"2492d26147794af576f041ece1f96ff9b4387411e8a7ebe1aebab298c37ba305\""
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.755580743Z" level=info msg="RemoveImage \"minikube-local-cache-test:functional-20210526211257-510955\""
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.761220885Z" level=info msg="ImageDelete event &ImageDelete{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,XXX_unrecognized:[],}"
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.762973064Z" level=info msg="ImageDelete event &ImageDelete{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,XXX_unrecognized:[],}"
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.769922070Z" level=info msg="RemoveImage \"minikube-local-cache-test:functional-20210526211257-510955\" returns successfully"
	May 26 21:27:14 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:14.144764721Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	May 26 21:27:14 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:14.153374839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	May 26 21:27:14 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:14.153849282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
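
The Exec entries above are the CRI-level trace of the DNS checks the test runs inside the busybox pod (nslookup kubernetes.default, nslookup kubernetes.default.svc.cluster.local, and the host.minikube.internal lookup), followed by a remove and re-import of the minikube-local-cache-test image; the same stream can be read on the node with journalctl -u containerd. A hypothetical sketch of driving such an exec through the API server with client-go's remotecommand package follows, reusing the kubeconfig and client setup from the earlier sketch; the container name "busybox" is an assumption, while the pod name is taken from this run.

    // exec_in_pod.go - illustrative API-driven exec; not the test's own helper.
    package main

    import (
        "bytes"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    // execInPod runs a command in the given pod/container and returns its stdout.
    func execInPod(cfg *rest.Config, cs kubernetes.Interface, ns, pod, container string, cmd []string) (string, error) {
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace(ns).
            Name(pod).
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: container,
                Command:   cmd,
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            return "", err
        }
        var stdout, stderr bytes.Buffer
        err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
        return stdout.String(), err
    }

    // Example call (container name is an assumption):
    //   out, err := execInPod(cfg, cs, "default", "busybox-6cd5ff77cb-4g265", "busybox",
    //       []string{"nslookup", "kubernetes.default"})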
	
	* 
	* ==> coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210526212238-510955
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210526212238-510955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	                    minikube.k8s.io/name=multinode-20210526212238-510955
	                    minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	                    minikube.k8s.io/version=v1.20.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210526212238-510955
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 May 2021 21:28:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-20210526212238-510955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	  System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	  Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.4
	  Kubelet Version:            v1.20.2
	  Kube-Proxy Version:         v1.20.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6cd5ff77cb-4g265                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     4m57s
	  kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m57s
	  kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  100Mi (0%)   0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m23s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s (x4 over 5m23s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x3 over 5m23s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x3 over 5m23s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m7s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m6s                   kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s                   kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s                   kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m56s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m46s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
	
	
	Name:               multinode-20210526212238-510955-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210526212238-510955-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 May 2021 21:26:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210526212238-510955-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 May 2021 21:28:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    multinode-20210526212238-510955-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f4ce45cafcc4968b1990f7d389bdc28
	  System UUID:                8f4ce45c-afcc-4968-b199-0f7d389bdc28
	  Boot ID:                    b644d687-3a13-4a74-8cd4-87bdfa46d2ca
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.4
	  Kubelet Version:            v1.20.2
	  Kube-Proxy Version:         v1.20.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6cd5ff77cb-dlslt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kindnet-wvlst               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m44s
	  kube-system                 kube-proxy-q7l2f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m44s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m44s (x2 over 2m44s)  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s (x2 over 2m44s)  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s (x2 over 2m44s)  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m42s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                2m34s                  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeReady
	
	
	Name:               multinode-20210526212238-510955-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210526212238-510955-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 May 2021 21:27:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210526212238-510955-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 May 2021 21:27:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    multinode-20210526212238-510955-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8af9c1527ba34d8e88af1625a32590a7
	  System UUID:                8af9c152-7ba3-4d8e-88af-1625a32590a7
	  Boot ID:                    23ed8454-1f5f-4bcb-92e6-24b7647d8dac
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.4
	  Kubelet Version:            v1.20.2
	  Kube-Proxy Version:         v1.20.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-b75lx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      97s
	  kube-system                 kube-proxy-ftdx6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 97s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s (x2 over 97s)  kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x2 over 97s)  kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x2 over 97s)  kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 95s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                87s                kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeReady
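
Of the three nodes described above, multinode-20210526212238-510955-m03 stands out: it carries the node.kubernetes.io/unreachable:NoSchedule taint and all four of its conditions are Unknown because its kubelet stopped posting status after 21:27:23, while the other two nodes remain Ready. The same view can be regenerated with kubectl --context multinode-20210526212238-510955 describe nodes. As a programmatic counterpart, a small hypothetical helper that surfaces the same signals (Ready condition, taints, capacity) through client-go is sketched below, reusing the client setup from the first sketch; it is not part of the test suite.

    // node_health.go - illustrative node summary; not minikube or test code.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeHealth lists every node with its Ready condition, taint count, and
    // capacity, the same fields shown in the describe output above.
    func printNodeHealth(cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            ready := "Unknown"
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    ready = string(c.Status)
                }
            }
            fmt.Printf("%-40s Ready=%-8s taints=%d cpu=%s ephemeral-storage=%s\n",
                n.Name, ready, len(n.Spec.Taints),
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
        return nil
    }

Against this cluster it would report Ready=True with no taints for the first two nodes and Ready=Unknown with one taint for m03, matching the describe output and the capacity figures (2 CPUs, 17784752Ki ephemeral storage) logged by node_conditions.go earlier.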
	
	* 
	* ==> dmesg <==
	* [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	[  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	[  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	[  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	[ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	[ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	[May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	[ +45.152218] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] <==
	* 2021-05-26 21:27:02.965139 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999996954s) to execute
	WARNING: 2021/05/26 21:27:02 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-05-26 21:27:03.281976 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000031526s) to execute
	2021-05-26 21:27:03.612927 W | wal: sync duration of 1.019180522s, expected less than 1s
	2021-05-26 21:27:03.613353 W | etcdserver: request "header:<ID:7886218195963551091 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe\" mod_revision:582 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe\" value_size:762 lease:7886218195963551089 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe\" > >>" with result "size:16" took too long (1.160121773s) to execute
	2021-05-26 21:27:03.614951 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (2.525487318s) to execute
	2021-05-26 21:27:03.615643 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (322.673894ms) to execute
	2021-05-26 21:27:03.616035 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (993.098662ms) to execute
	2021-05-26 21:27:03.616361 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.443267942s) to execute
	2021-05-26 21:27:03.617223 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.705993682s) to execute
	2021-05-26 21:27:03.618140 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:11041" took too long (2.129214129s) to execute
	2021-05-26 21:27:05.247138 W | wal: sync duration of 1.601172104s, expected less than 1s
	2021-05-26 21:27:05.334917 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.052221969s) to execute
	2021-05-26 21:27:10.917398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:13.651150 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (116.920368ms) to execute
	2021-05-26 21:27:13.658995 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (122.772657ms) to execute
	2021-05-26 21:27:20.917297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:30.917876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:40.917425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:50.917259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:00.916814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:10.917703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:20.917068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:30.916531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:40.917394 I | etcdserver/api/etcdhttp: /health OK (status code 200)
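
The etcd log above is dominated by slow-disk symptoms: WAL fsyncs of 1.0-1.6s against the expected bound of 1s, and read-only range requests taking two seconds or more, which line up with the multi-second request traces in the kube-apiserver section below. A common follow-up when chasing this kind of latency is to query etcd's status RPC directly. A hypothetical sketch using the etcd v3 Go client is below; the endpoint comes from this run, but the certificate paths are assumptions about where minikube keeps its etcd client certs and would need to be verified on the node.

    // etcd_status.go - illustrative status probe; endpoint and cert paths are assumptions.
    package main

    import (
        "context"
        "fmt"
        "time"

        "go.etcd.io/etcd/client/pkg/v3/transport"
        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // Assumed certificate locations; adjust to the actual paths on the node.
        tlsInfo := transport.TLSInfo{
            CertFile:      "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            KeyFile:       "/var/lib/minikube/certs/apiserver-etcd-client.key",
            TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
        }
        tlsCfg, err := tlsInfo.ClientConfig()
        if err != nil {
            panic(err)
        }

        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.39.229:2379"},
            DialTimeout: 5 * time.Second,
            TLS:         tlsCfg,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        status, err := cli.Status(ctx, "https://192.168.39.229:2379")
        if err != nil {
            panic(err)
        }
        fmt.Printf("version=%s dbSize=%dB raftTerm=%d\n", status.Version, status.DbSize, status.RaftTerm)
    }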
	
	* 
	* ==> kernel <==
	*  21:28:50 up 6 min,  0 users,  load average: 0.41, 0.50, 0.25
	Linux multinode-20210526212238-510955 4.19.182 #1 SMP Wed May 5 21:20:39 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] <==
	* Trace[554095935]: [3.323753444s] [3.323753444s] END
	I0526 21:27:03.621063       1 trace.go:205] Trace[1564844825]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.39.229 (26-May-2021 21:27:00.296) (total time: 3324ms):
	Trace[1564844825]: ---"Object stored in database" 3288ms (21:27:00.620)
	Trace[1564844825]: [3.324115066s] [3.324115066s] END
	I0526 21:27:03.622888       1 trace.go:205] Trace[828582385]: "GuaranteedUpdate etcd3" type:*core.Endpoints (26-May-2021 21:27:00.341) (total time: 3281ms):
	Trace[828582385]: ---"Transaction committed" 3281ms (21:27:00.622)
	Trace[828582385]: [3.281692282s] [3.281692282s] END
	I0526 21:27:03.623162       1 trace.go:205] Trace[1847006298]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.229 (26-May-2021 21:27:00.340) (total time: 3282ms):
	Trace[1847006298]: ---"Object stored in database" 3281ms (21:27:00.623)
	Trace[1847006298]: [3.282217035s] [3.282217035s] END
	I0526 21:27:03.631154       1 trace.go:205] Trace[2146471206]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.87 (26-May-2021 21:27:01.487) (total time: 2143ms):
	Trace[2146471206]: ---"Listing from storage done" 2132ms (21:27:00.620)
	Trace[2146471206]: [2.143426392s] [2.143426392s] END
	I0526 21:27:05.336213       1 trace.go:205] Trace[1522091935]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.39.229 (26-May-2021 21:27:03.648) (total time: 1687ms):
	Trace[1522091935]: ---"Object stored in database" 1687ms (21:27:00.336)
	Trace[1522091935]: [1.687793823s] [1.687793823s] END
	I0526 21:27:19.889601       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:27:19.889757       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:27:19.889783       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:27:58.932349       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:27:58.933036       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:27:58.933326       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:28:30.988777       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:28:30.989065       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:28:30.989089       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] <==
	* I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0526 21:26:06.517135       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955-m02" does not exist
	I0526 21:26:06.674802       1 range_allocator.go:373] Set node multinode-20210526212238-510955-m02 PodCIDR to [10.244.1.0/24]
	I0526 21:26:06.700780       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wvlst"
	I0526 21:26:06.703138       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q7l2f"
	E0526 21:26:06.758329       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"59f7a309-d89a-4050-8e82-fc8da888387f", ResourceVersion:"454", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757661018, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000d4fde0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d4fe00)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000d4fe20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d4fe40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000d4fe60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00137d640), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d4fe80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d4fea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000d4fee0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001a28060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ecae78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000cd1030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000107048)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000ecb108)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	E0526 21:26:06.766354       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c6806fba-0252-46f8-bc69-c8732fdb46d7", ResourceVersion:"472", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757661018, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e377a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e377c0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e377e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e37800)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e37820), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Crea
tionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e37840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.Flex
VolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e37860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVo
lumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CS
IVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e37880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*
v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e378a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e378e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amou
nt{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropa
gation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001abaa80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f870e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000d00af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(ni
l), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000362bf8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000f87130)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetConditio
n(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0526 21:26:06.798644       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"59f7a309-d89a-4050-8e82-fc8da888387f", ResourceVersion:"605", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757661018, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e36dc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e36de0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e36e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e36e20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e36e40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0011a2dc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e36e60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e36e80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e36ec0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001aba900), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f87c68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000cc1f80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a85510)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000f87cb8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	W0526 21:26:08.334957       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955-m02. Assuming now as a timestamp.
	I0526 21:26:08.335583       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955-m02 event: Registered Node multinode-20210526212238-510955-m02 in Controller"
	I0526 21:26:19.087324       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6cd5ff77cb to 2"
	I0526 21:26:19.112764       1 event.go:291] "Event occurred" object="default/busybox-6cd5ff77cb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6cd5ff77cb-dlslt"
	I0526 21:26:19.134741       1 event.go:291] "Event occurred" object="default/busybox-6cd5ff77cb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6cd5ff77cb-4g265"
	W0526 21:27:13.522686       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955-m03" does not exist
	I0526 21:27:13.673351       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ftdx6"
	I0526 21:27:13.690542       1 range_allocator.go:373] Set node multinode-20210526212238-510955-m03 PodCIDR to [10.244.2.0/24]
	I0526 21:27:13.690722       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b75lx"
	I0526 21:27:18.341118       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955-m03" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955-m03 event: Registered Node multinode-20210526212238-510955-m03 in Controller"
	W0526 21:27:18.341335       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955-m03. Assuming now as a timestamp.
	I0526 21:28:08.360412       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955-m03" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20210526212238-510955-m03 status is now: NodeNotReady"
	I0526 21:28:08.369881       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-ftdx6" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:28:08.387263       1 event.go:291] "Event occurred" object="kube-system/kindnet-b75lx" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] <==
	* I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] <==
	* W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:28:50 UTC. --
	May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	May 26 21:26:19 multinode-20210526212238-510955 kubelet[2767]: I0526 21:26:19.145582    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:26:19 multinode-20210526212238-510955 kubelet[2767]: I0526 21:26:19.201692    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cdspv" (UniqueName: "kubernetes.io/secret/07eb6d05-7a0d-41b2-b7f5-13145e0edcdb-default-token-cdspv") pod "busybox-6cd5ff77cb-4g265" (UID: "07eb6d05-7a0d-41b2-b7f5-13145e0edcdb")
	
	* 
	* ==> storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] <==
	* I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	

-- /stdout --
helpers_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210526212238-510955 -n multinode-20210526212238-510955
helpers_test.go:257: (dbg) Run:  kubectl --context multinode-20210526212238-510955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestMultiNode/serial/StopNode]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context multinode-20210526212238-510955 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context multinode-20210526212238-510955 describe pod : exit status 1 (45.580342ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:270: kubectl --context multinode-20210526212238-510955 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/StopNode (85.68s)

TestMultiNode/serial/StartAfterStop (10.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 node start m03 --alsologtostderr
multinode_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status
multinode_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210526212238-510955 status: exit status 3 (3.547700114s)

-- stdout --
	multinode-20210526212238-510955
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210526212238-510955-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210526212238-510955-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0526 21:28:57.721395  529246 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0526 21:28:57.721476  529246 status.go:258] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host

** /stderr **
multinode_test.go:243: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-20210526212238-510955 status" : exit status 3
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210526212238-510955 -n multinode-20210526212238-510955
helpers_test.go:240: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210526212238-510955 logs -n 25: (3.048133764s)
helpers_test.go:248: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                             |             Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p multinode-20210526212238-510955 -- apply -f               | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:18 UTC | Wed, 26 May 2021 21:26:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml            |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:19 UTC | Wed, 26 May 2021 21:26:21 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- rollout status                                            |                                 |         |         |                               |                               |
	|         | deployment/busybox                                           |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:21 UTC | Wed, 26 May 2021 21:26:21 UTC |
	|         | -- get pods -o                                               |                                 |         |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'                          |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:21 UTC | Wed, 26 May 2021 21:26:21 UTC |
	|         | -- get pods -o                                               |                                 |         |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'                         |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:21 UTC | Wed, 26 May 2021 21:26:22 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-4g265 --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.io                                       |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:22 UTC | Wed, 26 May 2021 21:26:22 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-dlslt --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.io                                       |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:22 UTC | Wed, 26 May 2021 21:26:22 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-4g265 --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.default                                  |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:22 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-dlslt --                                  |                                 |         |         |                               |                               |
	|         | nslookup kubernetes.default                                  |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | -- exec busybox-6cd5ff77cb-4g265                             |                                 |         |         |                               |                               |
	|         | -- nslookup                                                  |                                 |         |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local                         |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | -- exec busybox-6cd5ff77cb-dlslt                             |                                 |         |         |                               |                               |
	|         | -- nslookup                                                  |                                 |         |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local                         |                                 |         |         |                               |                               |
	| kubectl | -p multinode-20210526212238-510955                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | -- get pods -o                                               |                                 |         |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'                         |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:23 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-4g265                                     |                                 |         |         |                               |                               |
	|         | -- sh -c nslookup                                            |                                 |         |         |                               |                               |
	|         | host.minikube.internal | awk                                 |                                 |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                      |                                 |         |         |                               |                               |
	| ssh     | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:23 UTC | Wed, 26 May 2021 21:26:24 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | ip -4 -br -o a s eth0 | tr -s '                              |                                 |         |         |                               |                               |
	|         | ' | cut -d' ' -f3                                            |                                 |         |         |                               |                               |
	| kubectl | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:24 UTC | Wed, 26 May 2021 21:26:24 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -- exec                                                      |                                 |         |         |                               |                               |
	|         | busybox-6cd5ff77cb-dlslt                                     |                                 |         |         |                               |                               |
	|         | -- sh -c nslookup                                            |                                 |         |         |                               |                               |
	|         | host.minikube.internal | awk                                 |                                 |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                      |                                 |         |         |                               |                               |
	| ssh     | -p                                                           | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:24 UTC | Wed, 26 May 2021 21:26:24 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | ip -4 -br -o a s eth0 | tr -s '                              |                                 |         |         |                               |                               |
	|         | ' | cut -d' ' -f3                                            |                                 |         |         |                               |                               |
	| node    | add -p                                                       | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:26:24 UTC | Wed, 26 May 2021 21:27:25 UTC |
	|         | multinode-20210526212238-510955                              |                                 |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                       |                                 |         |         |                               |                               |
	| profile | list --output json                                           | minikube                        | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:26 UTC | Wed, 26 May 2021 21:27:26 UTC |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | cp testdata/cp-test.txt                                      |                                 |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | ssh sudo cat                                                 |                                 |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955 cp testdata/cp-test.txt      | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | multinode-20210526212238-510955-m02:/home/docker/cp-test.txt |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:27 UTC | Wed, 26 May 2021 21:27:27 UTC |
	|         | ssh -n                                                       |                                 |         |         |                               |                               |
	|         | multinode-20210526212238-510955-m02                          |                                 |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955 cp testdata/cp-test.txt      | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:28 UTC | Wed, 26 May 2021 21:27:28 UTC |
	|         | multinode-20210526212238-510955-m03:/home/docker/cp-test.txt |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:28 UTC | Wed, 26 May 2021 21:27:28 UTC |
	|         | ssh -n                                                       |                                 |         |         |                               |                               |
	|         | multinode-20210526212238-510955-m03                          |                                 |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:27:28 UTC | Wed, 26 May 2021 21:28:28 UTC |
	|         | node stop m03                                                |                                 |         |         |                               |                               |
	| -p      | multinode-20210526212238-510955                              | multinode-20210526212238-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:28:47 UTC | Wed, 26 May 2021 21:28:50 UTC |
	|         | logs -n 25                                                   |                                 |         |         |                               |                               |
	|---------|--------------------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/26 21:22:38
	Running on machine: debian-jenkins-agent-4
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0526 21:22:38.756182  527485 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:22:38.756246  527485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:22:38.756249  527485 out.go:304] Setting ErrFile to fd 2...
	I0526 21:22:38.756252  527485 out.go:338] TERM=,COLORTERM=, which probably does not support color
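out.go wires stdout and stderr and then decides whether color output is safe by looking at TERM and COLORTERM, as the two lines above record. A minimal sketch of that kind of heuristic in Go (illustrative only, not minikube's out package):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // supportsColor applies a heuristic like the one logged above: an empty or
    // "dumb" TERM with no COLORTERM probably cannot render color.
    func supportsColor() bool {
        if os.Getenv("COLORTERM") != "" {
            return true
        }
        term := os.Getenv("TERM")
        if term == "" || term == "dumb" {
            return false
        }
        return strings.Contains(term, "color") ||
            strings.HasPrefix(term, "xterm") ||
            strings.HasPrefix(term, "screen")
    }

    func main() {
        if supportsColor() {
            fmt.Println("terminal probably supports color")
        } else {
            fmt.Println("TERM/COLORTERM suggest no color support")
        }
    }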
	I0526 21:22:38.756343  527485 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:22:38.756577  527485 out.go:298] Setting JSON to false
	I0526 21:22:38.791255  527485 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":18321,"bootTime":1622045838,"procs":142,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:22:38.791346  527485 start.go:120] virtualization: kvm guest
	I0526 21:22:38.793833  527485 out.go:170] * [multinode-20210526212238-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:22:38.793948  527485 notify.go:169] Checking for updates...
	I0526 21:22:38.795567  527485 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:22:38.797007  527485 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:22:38.798452  527485 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:22:38.799854  527485 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:22:38.800033  527485 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:22:38.828260  527485 out.go:170] * Using the kvm2 driver based on user configuration
	I0526 21:22:38.828278  527485 start.go:278] selected driver: kvm2
	I0526 21:22:38.828283  527485 start.go:751] validating driver "kvm2" against <nil>
	I0526 21:22:38.828296  527485 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:22:38.828759  527485 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:22:38.828916  527485 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 21:22:38.839336  527485 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 21:22:38.839382  527485 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0526 21:22:38.839510  527485 start_flags.go:656] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0526 21:22:38.839530  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:22:38.839535  527485 cni.go:154] 0 nodes found, recommending kindnet
	I0526 21:22:38.839541  527485 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0526 21:22:38.839547  527485 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0526 21:22:38.839552  527485 start_flags.go:268] Found "CNI" CNI - setting NetworkPlugin=cni
	I0526 21:22:38.839560  527485 start_flags.go:273] config:
	{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0526 21:22:38.839645  527485 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:22:38.841622  527485 out.go:170] * Starting control plane node multinode-20210526212238-510955 in cluster multinode-20210526212238-510955
	I0526 21:22:38.841666  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:22:38.841712  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:22:38.841731  527485 cache.go:54] Caching tarball of preloaded images
	I0526 21:22:38.841861  527485 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 21:22:38.841878  527485 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
	I0526 21:22:38.842834  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:22:38.842875  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json: {Name:mk78eec809dd8a578b82c2b088249ee76deae305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:22:38.843042  527485 cache.go:191] Successfully downloaded all kic artifacts
	I0526 21:22:38.843079  527485 start.go:313] acquiring machines lock for multinode-20210526212238-510955: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:22:38.843149  527485 start.go:317] acquired machines lock for "multinode-20210526212238-510955" in 50.564µs
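start.go serializes VM provisioning by acquiring a named machines lock with a retry delay and an overall timeout before creating the host. The sketch below shows the same acquire-with-deadline shape using a process-local buffered channel; minikube's real lock is inter-process, which this deliberately does not replicate:

    package main

    import (
        "fmt"
        "time"
    )

    // acquire tries to take the single slot in lock, re-checking every delay
    // until the overall timeout expires.
    func acquire(lock chan struct{}, timeout, delay time.Duration) error {
        deadline := time.After(timeout)
        for {
            select {
            case lock <- struct{}{}: // took the slot
                return nil
            case <-deadline:
                return fmt.Errorf("timed out after %s acquiring machines lock", timeout)
            case <-time.After(delay):
                // lock still held elsewhere; try again
            }
        }
    }

    func main() {
        machinesLock := make(chan struct{}, 1) // capacity 1: one creator at a time
        start := time.Now()
        if err := acquire(machinesLock, 13*time.Minute, 500*time.Millisecond); err != nil {
            fmt.Println(err)
            return
        }
        defer func() { <-machinesLock }() // release
        fmt.Printf("acquired machines lock in %s\n", time.Since(start))
    }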
	I0526 21:22:38.843177  527485 start.go:89] Provisioning new machine with config: &{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0526 21:22:38.843250  527485 start.go:126] createHost starting for "" (driver="kvm2")
	I0526 21:22:38.845057  527485 out.go:197] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0526 21:22:38.845179  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:22:38.845238  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:22:38.855304  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:39059
	I0526 21:22:38.855713  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:22:38.856166  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:22:38.856194  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:22:38.856570  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:22:38.856762  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:38.856925  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:38.857068  527485 start.go:160] libmachine.API.Create for "multinode-20210526212238-510955" (driver="kvm2")
	I0526 21:22:38.857096  527485 client.go:168] LocalClient.Create starting
	I0526 21:22:38.857130  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem
	I0526 21:22:38.857163  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:22:38.857177  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:22:38.857308  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem
	I0526 21:22:38.857334  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:22:38.857358  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:22:38.857413  527485 main.go:128] libmachine: Running pre-create checks...
	I0526 21:22:38.857427  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .PreCreateCheck
	I0526 21:22:38.857733  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetConfigRaw
	I0526 21:22:38.858132  527485 main.go:128] libmachine: Creating machine...
	I0526 21:22:38.858149  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Create
	I0526 21:22:38.858272  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Creating KVM machine...
	I0526 21:22:38.860672  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found existing default KVM network
	I0526 21:22:38.861463  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:38.861317  527509 network.go:263] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000965e8] misses:0}
	I0526 21:22:38.861499  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:38.861408  527509 network.go:210] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0526 21:22:38.895502  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | trying to create private KVM network mk-multinode-20210526212238-510955 192.168.39.0/24...
	I0526 21:22:39.133237  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | private KVM network mk-multinode-20210526212238-510955 192.168.39.0/24 created
	I0526 21:22:39.133270  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.133210  527509 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:22:39.133284  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955 ...
	I0526 21:22:39.133324  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso
	I0526 21:22:39.133346  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso...
	I0526 21:22:39.318215  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.318063  527509 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa...
	I0526 21:22:39.382875  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.382772  527509 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/multinode-20210526212238-510955.rawdisk...
	I0526 21:22:39.382907  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Writing magic tar header
	I0526 21:22:39.382921  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Writing SSH key tar header
	I0526 21:22:39.382932  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.382878  527509 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955 ...
	I0526 21:22:39.383065  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955
	I0526 21:22:39.383099  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955 (perms=drwx------)
	I0526 21:22:39.383117  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines
	I0526 21:22:39.383143  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:22:39.383159  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1
	I0526 21:22:39.383176  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0526 21:22:39.383193  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home/jenkins
	I0526 21:22:39.383212  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines (perms=drwxr-xr-x)
	I0526 21:22:39.383234  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube (perms=drwxr-xr-x)
	I0526 21:22:39.383250  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1 (perms=drwxr-xr-x)
	I0526 21:22:39.383264  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0526 21:22:39.383275  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0526 21:22:39.383285  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Checking permissions on dir: /home
	I0526 21:22:39.383298  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Creating domain...
	I0526 21:22:39.383311  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Skipping /home - not owner
	I0526 21:22:39.409550  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:d9:59:0d in network default
	I0526 21:22:39.410061  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Ensuring networks are active...
	I0526 21:22:39.410089  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:39.411924  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Ensuring network default is active
	I0526 21:22:39.412209  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Ensuring network mk-multinode-20210526212238-510955 is active
	I0526 21:22:39.412686  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Getting domain xml...
	I0526 21:22:39.414362  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Creating domain...
	I0526 21:22:39.766721  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Waiting to get IP...
	I0526 21:22:39.767397  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:39.767893  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:39.767927  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:39.767865  527509 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0526 21:22:40.032058  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.032436  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.032462  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:40.032386  527509 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0526 21:22:40.414793  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.415240  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.415278  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:40.415198  527509 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0526 21:22:40.839646  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.840058  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:40.840094  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:40.840006  527509 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0526 21:22:41.314603  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.315042  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.315075  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:41.314986  527509 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0526 21:22:41.903598  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.903947  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:41.903973  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:41.903891  527509 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0526 21:22:42.739982  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:42.740267  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:42.740303  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:42.740242  527509 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0526 21:22:43.488080  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:43.488537  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:43.488564  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:43.488510  527509 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0526 21:22:44.477090  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:44.477458  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:44.477489  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:44.477406  527509 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0526 21:22:45.668795  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:45.669147  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:45.669182  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:45.669085  527509 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0526 21:22:47.348770  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:47.349176  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:47.349207  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:47.349116  527509 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0526 21:22:49.696210  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:49.696624  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:49.696659  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:49.696561  527509 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0526 21:22:53.067037  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:53.067462  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find current IP address of domain multinode-20210526212238-510955 in network mk-multinode-20210526212238-510955
	I0526 21:22:53.067498  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | I0526 21:22:53.067402  527509 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0526 21:22:56.188960  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.189444  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Found IP for machine: 192.168.39.229
	I0526 21:22:56.189475  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has current primary IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.189486  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Reserving static IP address...
	I0526 21:22:56.189744  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | unable to find host DHCP lease matching {name: "multinode-20210526212238-510955", mac: "52:54:00:0c:8b:34", ip: "192.168.39.229"} in network mk-multinode-20210526212238-510955
	I0526 21:22:56.237513  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Reserved static IP address: 192.168.39.229
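The preceding block is libmachine repeatedly asking libvirt for the guest's DHCP lease and sleeping a little longer after each miss until an IP shows up. A generic retry-with-growing-delay loop in Go, assuming a placeholder lookupIP helper rather than minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a stand-in for querying the hypervisor's DHCP leases.
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls lookupIP, growing the delay after each miss, and gives
    // up once the overall deadline passes.
    func waitForIP(deadline time.Duration) (string, error) {
        delay := 250 * time.Millisecond
        timeout := time.After(deadline)
        for {
            ip, err := lookupIP()
            if err == nil {
                return ip, nil
            }
            select {
            case <-timeout:
                return "", fmt.Errorf("timed out waiting for machine IP: %w", err)
            case <-time.After(delay):
                delay = delay * 3 / 2 // back off roughly 1.5x per attempt
            }
        }
    }

    func main() {
        if ip, err := waitForIP(2 * time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("got IP:", ip)
        }
    }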
	I0526 21:22:56.237543  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Waiting for SSH to be available...
	I0526 21:22:56.237554  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Getting to WaitForSSH function...
	I0526 21:22:56.242739  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.243048  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.243083  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.243167  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Using SSH client type: external
	I0526 21:22:56.243197  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa (-rw-------)
	I0526 21:22:56.243239  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:22:56.243273  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | About to run SSH command:
	I0526 21:22:56.243284  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | exit 0
	I0526 21:22:56.376672  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | SSH cmd err, output: <nil>: 
	I0526 21:22:56.377111  527485 main.go:128] libmachine: (multinode-20210526212238-510955) KVM machine creation complete!
	I0526 21:22:56.377180  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetConfigRaw
	I0526 21:22:56.377707  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:56.377887  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:56.378034  527485 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0526 21:22:56.378052  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:22:56.380329  527485 main.go:128] libmachine: Detecting operating system of created instance...
	I0526 21:22:56.380348  527485 main.go:128] libmachine: Waiting for SSH to be available...
	I0526 21:22:56.380357  527485 main.go:128] libmachine: Getting to WaitForSSH function...
	I0526 21:22:56.380367  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.384748  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.385102  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.385137  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.385197  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.385400  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.385559  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.385683  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.385794  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.385998  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.386014  527485 main.go:128] libmachine: About to run SSH command:
	exit 0
	I0526 21:22:56.503895  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 
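WaitForSSH considers the machine reachable once a plain `exit 0` succeeds over SSH, which proves both connectivity and key authentication in one step. A self-contained sketch of such a native check with golang.org/x/crypto/ssh (address, user, and key path are placeholders, and this is not libmachine's implementation):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady returns nil once "exit 0" can be run on the host over SSH.
    func sshReady(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0") // any transport error or non-zero exit means "not ready"
    }

    func main() {
        if err := sshReady("192.168.39.229:22", "docker", "/path/to/id_rsa"); err != nil {
            fmt.Println("ssh not ready:", err)
            return
        }
        fmt.Println("ssh is available")
    }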
	I0526 21:22:56.503914  527485 main.go:128] libmachine: Detecting the provisioner...
	I0526 21:22:56.503922  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.508753  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.509063  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.509091  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.509237  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.509419  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.509570  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.509670  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.509797  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.509953  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.509972  527485 main.go:128] libmachine: About to run SSH command:
	cat /etc/os-release
	I0526 21:22:56.626000  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0526 21:22:56.626062  527485 main.go:128] libmachine: found compatible host: buildroot
	I0526 21:22:56.626078  527485 main.go:128] libmachine: Provisioning with buildroot...
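The provisioner is identified by reading /etc/os-release on the guest and matching the ID/NAME fields, which is how Buildroot is recognized above. A small parser for that key=value format, offered as a sketch:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns /etc/os-release style key=value lines into a map,
    // trimming optional surrounding quotes from the values.
    func parseOSRelease(contents string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            parts := strings.SplitN(line, "=", 2)
            if len(parts) != 2 {
                continue
            }
            out[parts[0]] = strings.Trim(parts[1], `"`)
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
        info := parseOSRelease(sample)
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host:", info["PRETTY_NAME"])
        }
    }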
	I0526 21:22:56.626088  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:56.626246  527485 buildroot.go:166] provisioning hostname "multinode-20210526212238-510955"
	I0526 21:22:56.626274  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:56.626456  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.630680  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.630962  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.630991  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.631098  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.631280  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.631439  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.631564  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.631708  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.631859  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.631875  527485 main.go:128] libmachine: About to run SSH command:
	sudo hostname multinode-20210526212238-510955 && echo "multinode-20210526212238-510955" | sudo tee /etc/hostname
	I0526 21:22:56.756884  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: multinode-20210526212238-510955
	
	I0526 21:22:56.756910  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.761538  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.761862  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.761885  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.762049  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:56.762210  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.762353  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:56.762480  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:56.762641  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:22:56.762804  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0526 21:22:56.762834  527485 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210526212238-510955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210526212238-510955/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210526212238-510955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:22:56.883834  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:22:56.883870  527485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:22:56.883898  527485 buildroot.go:174] setting up certificates
	I0526 21:22:56.883908  527485 provision.go:83] configureAuth start
	I0526 21:22:56.883920  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetMachineName
	I0526 21:22:56.884105  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:22:56.888712  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.889022  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.889065  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.889177  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:56.893241  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.893569  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:56.893605  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:56.893664  527485 provision.go:137] copyHostCerts
	I0526 21:22:56.893690  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:22:56.893734  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:22:56.893749  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:22:56.893806  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:22:56.893908  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:22:56.893940  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:22:56.893948  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:22:56.893980  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:22:56.894036  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:22:56.894063  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:22:56.894074  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:22:56.894104  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:22:56.894160  527485 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.multinode-20210526212238-510955 san=[192.168.39.229 192.168.39.229 localhost 127.0.0.1 minikube multinode-20210526212238-510955]
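configureAuth issues a server certificate whose SANs cover the VM IP, localhost, and the machine name, signed by the local minikube CA from the certs directory. A compact, self-contained sketch of issuing such a certificate with crypto/x509; the CA here is generated in-memory purely for illustration, whereas the real flow loads ca.pem and ca-key.pem from disk:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Self-signed CA, standing in for .minikube/certs/ca.pem + ca-key.pem.
        caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        check(err)
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server cert with IP and DNS SANs, mirroring the san=[...] list above.
        srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        check(err)
        srvTpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.229"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-20210526212238-510955"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        fmt.Printf("issued server cert, %d bytes of DER\n", len(srvDER))
    }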
	I0526 21:22:57.293529  527485 provision.go:171] copyRemoteCerts
	I0526 21:22:57.293605  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:22:57.293638  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.298962  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.299286  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.299319  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.299485  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.299697  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.299864  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.299966  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.383753  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0526 21:22:57.383820  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:22:57.400655  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0526 21:22:57.400704  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0526 21:22:57.417361  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0526 21:22:57.417400  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0526 21:22:57.433924  527485 provision.go:86] duration metric: configureAuth took 550.003144ms
	I0526 21:22:57.433943  527485 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:22:57.434087  527485 main.go:128] libmachine: Checking connection to Docker...
	I0526 21:22:57.434102  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetURL
	I0526 21:22:57.436663  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Using libvirt version 3000000
	I0526 21:22:57.441125  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.441437  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.441474  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.441576  527485 main.go:128] libmachine: Docker is up and running!
	I0526 21:22:57.441595  527485 main.go:128] libmachine: Reticulating splines...
	I0526 21:22:57.441603  527485 client.go:171] LocalClient.Create took 18.584500055s
	I0526 21:22:57.441621  527485 start.go:168] duration metric: libmachine.API.Create for "multinode-20210526212238-510955" took 18.584554789s
	I0526 21:22:57.441647  527485 start.go:267] post-start starting for "multinode-20210526212238-510955" (driver="kvm2")
	I0526 21:22:57.441652  527485 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:22:57.441664  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.441876  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:22:57.441900  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.445895  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.446135  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.446157  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.446277  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.446442  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.446598  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.446750  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.531400  527485 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:22:57.535466  527485 command_runner.go:124] > NAME=Buildroot
	I0526 21:22:57.535481  527485 command_runner.go:124] > VERSION=2020.02.12
	I0526 21:22:57.535485  527485 command_runner.go:124] > ID=buildroot
	I0526 21:22:57.535490  527485 command_runner.go:124] > VERSION_ID=2020.02.12
	I0526 21:22:57.535495  527485 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0526 21:22:57.535526  527485 info.go:137] Remote host: Buildroot 2020.02.12
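
[editor's note] The OS detection above is just a read of /etc/os-release, with PRETTY_NAME reported as the remote host description. A small stdlib-only sketch of parsing that key=value format (not the code path minikube itself uses):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()

		info := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			k, v, ok := strings.Cut(sc.Text(), "=")
			if !ok {
				continue
			}
			// Values may be quoted, e.g. PRETTY_NAME="Buildroot 2020.02.12".
			info[k] = strings.Trim(v, `"`)
		}
		fmt.Println("Remote host:", info["PRETTY_NAME"])
	}
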
	I0526 21:22:57.535553  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:22:57.535596  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
	I0526 21:22:57.535738  527485 start.go:270] post-start completed in 94.085921ms
	I0526 21:22:57.535784  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetConfigRaw
	I0526 21:22:57.536236  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:22:57.540471  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.540741  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.540771  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.541002  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:22:57.541144  527485 start.go:129] duration metric: createHost completed in 18.697885597s
	I0526 21:22:57.541156  527485 start.go:80] releasing machines lock for "multinode-20210526212238-510955", held for 18.697995329s
	I0526 21:22:57.541186  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.541336  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:22:57.545449  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.545746  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.545767  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.545902  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.546065  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.546504  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:22:57.546703  527485 ssh_runner.go:149] Run: systemctl --version
	I0526 21:22:57.546724  527485 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:22:57.546730  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.546752  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:22:57.553878  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.553987  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.554209  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.554238  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.554268  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:22:57.554288  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:22:57.554345  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.554500  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.554502  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:22:57.554655  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:22:57.554656  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.554817  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:22:57.554839  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.554926  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:22:57.638446  527485 command_runner.go:124] > systemd 244 (244)
	I0526 21:22:57.638487  527485 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0526 21:22:57.638513  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:22:57.638549  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:22:57.638599  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:22:57.663848  527485 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0526 21:22:57.663864  527485 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0526 21:22:57.663870  527485 command_runner.go:124] > <H1>302 Moved</H1>
	I0526 21:22:57.663880  527485 command_runner.go:124] > The document has moved
	I0526 21:22:57.663889  527485 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0526 21:22:57.663901  527485 command_runner.go:124] > </BODY></HTML>
	I0526 21:23:01.640010  527485 command_runner.go:124] > {
	I0526 21:23:01.640032  527485 command_runner.go:124] >   "images": [
	I0526 21:23:01.640039  527485 command_runner.go:124] >   ]
	I0526 21:23:01.640043  527485 command_runner.go:124] > }
	I0526 21:23:01.640717  527485 command_runner.go:124] ! time="2021-05-26T21:22:57Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0526 21:23:01.640757  527485 command_runner.go:124] ! time="2021-05-26T21:22:59Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:23:01.640773  527485 command_runner.go:124] ! time="2021-05-26T21:23:01Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:23:01.640790  527485 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.002175504s)
	I0526 21:23:01.640891  527485 containerd.go:566] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.2". assuming images are not preloaded.
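
[editor's note] The empty "images" list returned by `sudo crictl images --output json` is what triggers the preload below: when the kube-apiserver image for the target version is absent, the preload tarball is copied over and extracted. A hedged sketch of that decision only; the struct fields and image name mirror the JSON shown in the log, but this is not minikube's containerd.go:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// needsPreload reports whether want is missing from crictl's JSON output.
	func needsPreload(crictlJSON []byte, want string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(crictlJSON, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					return false, nil
				}
			}
		}
		return true, nil
	}

	func main() {
		out := []byte(`{"images": []}`) // what the first run above returned
		need, err := needsPreload(out, "k8s.gcr.io/kube-apiserver:v1.20.2")
		fmt.Println(need, err) // true <nil> -> copy and extract the preload tarball
	}
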
	I0526 21:23:01.640946  527485 ssh_runner.go:149] Run: which lz4
	I0526 21:23:01.644596  527485 command_runner.go:124] > /bin/lz4
	I0526 21:23:01.644955  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0526 21:23:01.645027  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0526 21:23:01.649182  527485 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:23:01.649225  527485 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:23:01.649244  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (953722271 bytes)
	I0526 21:23:05.598453  527485 containerd.go:503] Took 3.953446 seconds to copy over tarball
	I0526 21:23:05.598520  527485 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0526 21:23:12.109109  527485 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.510561228s)
	I0526 21:23:12.109141  527485 containerd.go:510] Took 6.510657 seconds to extract the tarball
	I0526 21:23:12.109179  527485 ssh_runner.go:100] rm: /preloaded.tar.lz4
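
[editor's note] The sequence above is: stat the target to see whether /preloaded.tar.lz4 already exists, scp the ~954 MB tarball because it does not, extract it with `tar -I lz4 -C /var -xf`, then delete it. A rough sketch of the same check-then-extract flow run directly on the guest (shelling out; not the ssh_runner code, and the local path is the one shown in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		tarball := "/preloaded.tar.lz4"

		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("tarball not present, would copy it over first:", err)
			return
		}

		start := time.Now()
		// Same extraction command the log runs over SSH.
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v (%s)\n", err, out)
			return
		}
		fmt.Printf("Took %.6f seconds to extract the tarball\n", time.Since(start).Seconds())

		// Free the space once the images are unpacked.
		_ = os.Remove(tarball)
	}
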
	I0526 21:23:12.170392  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:23:12.334202  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:23:12.374924  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:23:12.384855  527485 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0526 21:23:12.414104  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:23:12.429040  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:23:12.438147  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:23:12.451339  527485 command_runner.go:124] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0526 21:23:12.451364  527485 command_runner.go:124] > image-endpoint: unix:///run/containerd/containerd.sock
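
[editor's note] The two lines echoed back are the entire /etc/crictl.yaml: they pin crictl to the containerd socket so the deprecated endpoint probing (and the ~4s delay seen in the earlier `crictl images` call) goes away. A sketch of generating that file content yourself, rather than via the `printf | sudo tee` pipeline in the log (writing to /tmp only so the sketch stays runnable without root):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const sock = "unix:///run/containerd/containerd.sock"
		content := fmt.Sprintf("runtime-endpoint: %s\nimage-endpoint: %s\n", sock, sock)

		// The real target is /etc/crictl.yaml, which needs root.
		if err := os.WriteFile("/tmp/crictl.yaml", []byte(content), 0644); err != nil {
			fmt.Println("write:", err)
			return
		}
		fmt.Print(content)
	}
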
	I0526 21:23:12.451502  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3Npe
mUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lLm9wdGlvbnNdCiAgICAgICAgICBOb1Bpdm90Um9vdCA9IHRydWUKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvb
GQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0526 21:23:12.468915  527485 command_runner.go:124] > root = "/var/lib/containerd"
	I0526 21:23:12.468936  527485 command_runner.go:124] > state = "/run/containerd"
	I0526 21:23:12.468944  527485 command_runner.go:124] > oom_score = 0
	I0526 21:23:12.468949  527485 command_runner.go:124] > [grpc]
	I0526 21:23:12.468957  527485 command_runner.go:124] >   address = "/run/containerd/containerd.sock"
	I0526 21:23:12.468963  527485 command_runner.go:124] >   uid = 0
	I0526 21:23:12.468969  527485 command_runner.go:124] >   gid = 0
	I0526 21:23:12.468976  527485 command_runner.go:124] >   max_recv_message_size = 16777216
	I0526 21:23:12.468985  527485 command_runner.go:124] >   max_send_message_size = 16777216
	I0526 21:23:12.468990  527485 command_runner.go:124] > [debug]
	I0526 21:23:12.468998  527485 command_runner.go:124] >   address = ""
	I0526 21:23:12.469003  527485 command_runner.go:124] >   uid = 0
	I0526 21:23:12.469010  527485 command_runner.go:124] >   gid = 0
	I0526 21:23:12.469017  527485 command_runner.go:124] >   level = ""
	I0526 21:23:12.469025  527485 command_runner.go:124] > [metrics]
	I0526 21:23:12.469031  527485 command_runner.go:124] >   address = ""
	I0526 21:23:12.469040  527485 command_runner.go:124] >   grpc_histogram = false
	I0526 21:23:12.469046  527485 command_runner.go:124] > [cgroup]
	I0526 21:23:12.469052  527485 command_runner.go:124] >   path = ""
	I0526 21:23:12.469058  527485 command_runner.go:124] > [plugins]
	I0526 21:23:12.469065  527485 command_runner.go:124] >   [plugins.cgroups]
	I0526 21:23:12.469076  527485 command_runner.go:124] >     no_prometheus = false
	I0526 21:23:12.469084  527485 command_runner.go:124] >   [plugins.cri]
	I0526 21:23:12.469090  527485 command_runner.go:124] >     stream_server_address = ""
	I0526 21:23:12.469101  527485 command_runner.go:124] >     stream_server_port = "10010"
	I0526 21:23:12.469108  527485 command_runner.go:124] >     enable_selinux = false
	I0526 21:23:12.469118  527485 command_runner.go:124] >     sandbox_image = "k8s.gcr.io/pause:3.2"
	I0526 21:23:12.469126  527485 command_runner.go:124] >     stats_collect_period = 10
	I0526 21:23:12.469133  527485 command_runner.go:124] >     systemd_cgroup = false
	I0526 21:23:12.469144  527485 command_runner.go:124] >     enable_tls_streaming = false
	I0526 21:23:12.469151  527485 command_runner.go:124] >     max_container_log_line_size = 16384
	I0526 21:23:12.469158  527485 command_runner.go:124] >     [plugins.cri.containerd]
	I0526 21:23:12.469165  527485 command_runner.go:124] >       snapshotter = "overlayfs"
	I0526 21:23:12.469174  527485 command_runner.go:124] >       [plugins.cri.containerd.default_runtime]
	I0526 21:23:12.469181  527485 command_runner.go:124] >         runtime_type = "io.containerd.runc.v2"
	I0526 21:23:12.469193  527485 command_runner.go:124] >         [plugins.cri.containerd.default_runtime.options]
	I0526 21:23:12.469201  527485 command_runner.go:124] >           NoPivotRoot = true
	I0526 21:23:12.469209  527485 command_runner.go:124] >       [plugins.cri.containerd.untrusted_workload_runtime]
	I0526 21:23:12.469219  527485 command_runner.go:124] >         runtime_type = ""
	I0526 21:23:12.469225  527485 command_runner.go:124] >         runtime_engine = ""
	I0526 21:23:12.469238  527485 command_runner.go:124] >         runtime_root = ""
	I0526 21:23:12.469244  527485 command_runner.go:124] >     [plugins.cri.cni]
	I0526 21:23:12.469252  527485 command_runner.go:124] >       bin_dir = "/opt/cni/bin"
	I0526 21:23:12.469259  527485 command_runner.go:124] >       conf_dir = "/etc/cni/net.mk"
	I0526 21:23:12.469268  527485 command_runner.go:124] >       conf_template = ""
	I0526 21:23:12.469275  527485 command_runner.go:124] >     [plugins.cri.registry]
	I0526 21:23:12.469293  527485 command_runner.go:124] >       [plugins.cri.registry.mirrors]
	I0526 21:23:12.469305  527485 command_runner.go:124] >         [plugins.cri.registry.mirrors."docker.io"]
	I0526 21:23:12.469314  527485 command_runner.go:124] >           endpoint = ["https://registry-1.docker.io"]
	I0526 21:23:12.469322  527485 command_runner.go:124] >         [plugins.diff-service]
	I0526 21:23:12.469329  527485 command_runner.go:124] >     default = ["walking"]
	I0526 21:23:12.469336  527485 command_runner.go:124] >   [plugins.scheduler]
	I0526 21:23:12.469343  527485 command_runner.go:124] >     pause_threshold = 0.02
	I0526 21:23:12.469350  527485 command_runner.go:124] >     deletion_threshold = 0
	I0526 21:23:12.469356  527485 command_runner.go:124] >     mutation_threshold = 100
	I0526 21:23:12.469365  527485 command_runner.go:124] >     schedule_delay = "0s"
	I0526 21:23:12.469372  527485 command_runner.go:124] >     startup_delay = "100ms"
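
[editor's note] The long base64 argument in the command above is simply the containerd config.toml, echoed back in decoded form in the lines just shown; encoding it sidesteps shell-quoting problems when the whole file is pushed through a single `printf ... | base64 -d | sudo tee` command. A small stdlib sketch of building such a command string (hypothetical, with a truncated sample config):

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		// Truncated sample of the decoded config shown above.
		configTOML := "root = \"/var/lib/containerd\"\nstate = \"/run/containerd\"\noom_score = 0\n"

		encoded := base64.StdEncoding.EncodeToString([]byte(configTOML))
		// One self-contained shell command: decode on the remote side and write the file as root.
		cmd := fmt.Sprintf("sudo mkdir -p /etc/containerd && printf %%s %q | base64 -d | sudo tee /etc/containerd/config.toml", encoded)
		fmt.Println(cmd)
	}
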
	I0526 21:23:12.469420  527485 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:23:12.478525  527485 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:23:12.478613  527485 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:23:12.478664  527485 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0526 21:23:12.492700  527485 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
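
[editor's note] The sysctl failure above is benign: the br_netfilter module is not loaded yet, so the key does not exist; the remedy is exactly what the log does next, modprobe the module and turn on IPv4 forwarding. A hedged sketch of that fallback, shelling out with the same commands:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v (%s)", name, args, err, out)
		}
		return nil
	}

	func main() {
		// The sysctl key only exists once br_netfilter is loaded.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("sysctl check failed, loading br_netfilter:", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				fmt.Println("modprobe:", err)
			}
		}
		// kube-proxy and the CNI plugin need forwarding enabled regardless.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Println("ip_forward:", err)
		}
	}
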
	I0526 21:23:12.499064  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:23:12.612771  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:23:16.654015  527485 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (4.041197168s)
	I0526 21:23:16.654058  527485 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:23:16.654117  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:23:16.662580  527485 command_runner.go:124] ! stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:23:16.662621  527485 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:23:17.767943  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:23:17.773490  527485 command_runner.go:124] >   File: /run/containerd/containerd.sock
	I0526 21:23:17.773516  527485 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0526 21:23:17.773524  527485 command_runner.go:124] > Device: 14h/20d	Inode: 29618       Links: 1
	I0526 21:23:17.773532  527485 command_runner.go:124] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:23:17.773538  527485 command_runner.go:124] > Access: 2021-05-26 21:23:16.714470320 +0000
	I0526 21:23:17.773543  527485 command_runner.go:124] > Modify: 2021-05-26 21:23:16.714470320 +0000
	I0526 21:23:17.773549  527485 command_runner.go:124] > Change: 2021-05-26 21:23:16.714470320 +0000
	I0526 21:23:17.773553  527485 command_runner.go:124] >  Birth: -
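
[editor's note] After the containerd restart the socket is not there immediately: the log waits up to 60s, stats the path, retries after ~1.1s, and the second stat succeeds. A minimal sketch of that wait-for-socket loop using a plain os.Stat with a deadline (not minikube's retry package):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls path until it exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(time.Second) // the log uses a jittered ~1.1s backoff
		}
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("containerd socket is up")
	}
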
	I0526 21:23:17.773923  527485 start.go:401] Will wait 60s for crictl version
	I0526 21:23:17.773983  527485 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:23:17.795074  527485 command_runner.go:124] > Version:  0.1.0
	I0526 21:23:17.795168  527485 command_runner.go:124] > RuntimeName:  containerd
	I0526 21:23:17.795509  527485 command_runner.go:124] > RuntimeVersion:  v1.4.4
	I0526 21:23:17.795668  527485 command_runner.go:124] > RuntimeApiVersion:  v1alpha2
	I0526 21:23:17.797077  527485 start.go:410] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.4
	RuntimeApiVersion:  v1alpha2
	I0526 21:23:17.797129  527485 ssh_runner.go:149] Run: containerd --version
	I0526 21:23:17.825318  527485 command_runner.go:124] > containerd github.com/containerd/containerd v1.4.4 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
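
[editor's note] The version probe above has two parts: `sudo crictl version` (parsed into the Version/RuntimeName/RuntimeVersion/RuntimeApiVersion fields shown) and a plain `containerd --version` for the banner. A sketch of pulling the runtime name and version out of the crictl text, assuming its "Key:  value" line format:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Sample taken from the log output above.
		out := "Version:  0.1.0\nRuntimeName:  containerd\nRuntimeVersion:  v1.4.4\nRuntimeApiVersion:  v1alpha2\n"

		fields := map[string]string{}
		for _, line := range strings.Split(out, "\n") {
			k, v, ok := strings.Cut(line, ":")
			if !ok {
				continue
			}
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
		fmt.Printf("Preparing Kubernetes on %s %s ...\n",
			fields["RuntimeName"], strings.TrimPrefix(fields["RuntimeVersion"], "v"))
	}
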
	I0526 21:23:17.827109  527485 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	I0526 21:23:17.827153  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetIP
	I0526 21:23:17.832341  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:17.832680  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:17.832703  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:17.832883  527485 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0526 21:23:17.837078  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
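
[editor's note] The bash one-liner above makes the host.minikube.internal entry idempotent: strip any existing line for that name, append a fresh one, and copy the temp file back over /etc/hosts. The same idea in a native sketch, operating on a scratch copy instead of /etc/hosts so it is safe to run without root:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const hostsFile = "/tmp/hosts-example" // the real target is /etc/hosts (root only)
		const entry = "192.168.39.1\thost.minikube.internal"

		// Seed a sample file so the sketch is self-contained.
		_ = os.WriteFile(hostsFile, []byte("127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"), 0644)

		data, err := os.ReadFile(hostsFile)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		// Drop any stale line for the name, mirroring the grep -v in the log.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			fmt.Println("write:", err)
		}
	}
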
	I0526 21:23:17.847842  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:23:17.847865  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:23:17.847902  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:23:17.868882  527485 command_runner.go:124] > {
	I0526 21:23:17.868898  527485 command_runner.go:124] >   "images": [
	I0526 21:23:17.868904  527485 command_runner.go:124] >     {
	I0526 21:23:17.868922  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:23:17.868928  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.868938  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:23:17.868943  527485 command_runner.go:124] >       ],
	I0526 21:23:17.868950  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.868963  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:23:17.868977  527485 command_runner.go:124] >       ],
	I0526 21:23:17.868984  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:23:17.868990  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.868996  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869006  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869015  527485 command_runner.go:124] >     },
	I0526 21:23:17.869020  527485 command_runner.go:124] >     {
	I0526 21:23:17.869036  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:23:17.869046  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869054  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:23:17.869063  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869069  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869083  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:23:17.869094  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869100  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:23:17.869107  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869113  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:23:17.869121  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869125  527485 command_runner.go:124] >     },
	I0526 21:23:17.869133  527485 command_runner.go:124] >     {
	I0526 21:23:17.869143  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:23:17.869152  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869161  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:23:17.869169  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869176  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869188  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:23:17.869195  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869201  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:23:17.869211  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869222  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869228  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869234  527485 command_runner.go:124] >     },
	I0526 21:23:17.869239  527485 command_runner.go:124] >     {
	I0526 21:23:17.869251  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:23:17.869259  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869268  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:23:17.869277  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869284  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869299  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:23:17.869317  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869324  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:23:17.869331  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869338  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869351  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869357  527485 command_runner.go:124] >     },
	I0526 21:23:17.869363  527485 command_runner.go:124] >     {
	I0526 21:23:17.869376  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:23:17.869384  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869391  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:23:17.869399  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869405  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869418  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:23:17.869424  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869430  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:23:17.869437  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869443  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869451  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869456  527485 command_runner.go:124] >     },
	I0526 21:23:17.869464  527485 command_runner.go:124] >     {
	I0526 21:23:17.869474  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:23:17.869483  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869491  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:23:17.869497  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869502  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869514  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:23:17.869521  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869527  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:23:17.869534  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869540  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869546  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869553  527485 command_runner.go:124] >     },
	I0526 21:23:17.869558  527485 command_runner.go:124] >     {
	I0526 21:23:17.869569  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:23:17.869581  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869589  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:23:17.869596  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869603  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869616  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:23:17.869625  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869631  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:23:17.869637  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.869645  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.869653  527485 command_runner.go:124] >       },
	I0526 21:23:17.869660  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869667  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869673  527485 command_runner.go:124] >     },
	I0526 21:23:17.869679  527485 command_runner.go:124] >     {
	I0526 21:23:17.869690  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:23:17.869697  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869707  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:23:17.869712  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869721  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869735  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:23:17.869744  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869751  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:23:17.869759  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.869766  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.869772  527485 command_runner.go:124] >       },
	I0526 21:23:17.869778  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869786  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869792  527485 command_runner.go:124] >     },
	I0526 21:23:17.869798  527485 command_runner.go:124] >     {
	I0526 21:23:17.869808  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:23:17.869817  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869824  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:23:17.869832  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869838  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869851  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:23:17.869859  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869866  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:23:17.869873  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.869879  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.869885  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.869892  527485 command_runner.go:124] >     },
	I0526 21:23:17.869896  527485 command_runner.go:124] >     {
	I0526 21:23:17.869907  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:23:17.869916  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.869924  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:23:17.869930  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869937  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.869949  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:23:17.869958  527485 command_runner.go:124] >       ],
	I0526 21:23:17.869964  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:23:17.869971  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.869977  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.869983  527485 command_runner.go:124] >       },
	I0526 21:23:17.870024  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.870035  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.870040  527485 command_runner.go:124] >     },
	I0526 21:23:17.870045  527485 command_runner.go:124] >     {
	I0526 21:23:17.870059  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:23:17.870068  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.870075  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:23:17.870081  527485 command_runner.go:124] >       ],
	I0526 21:23:17.870088  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.870100  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:23:17.870109  527485 command_runner.go:124] >       ],
	I0526 21:23:17.870116  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:23:17.870123  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.870129  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.870136  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.870141  527485 command_runner.go:124] >     }
	I0526 21:23:17.870146  527485 command_runner.go:124] >   ]
	I0526 21:23:17.870153  527485 command_runner.go:124] > }
	I0526 21:23:17.870585  527485 containerd.go:570] all images are preloaded for containerd runtime.
	I0526 21:23:17.870600  527485 containerd.go:474] Images already preloaded, skipping extraction
	I0526 21:23:17.870632  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:23:17.889731  527485 command_runner.go:124] > {
	I0526 21:23:17.889744  527485 command_runner.go:124] >   "images": [
	I0526 21:23:17.889748  527485 command_runner.go:124] >     {
	I0526 21:23:17.889757  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:23:17.889762  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.889775  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:23:17.889786  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889797  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.889808  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:23:17.889813  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889818  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:23:17.889822  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.889826  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.889830  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.889833  527485 command_runner.go:124] >     },
	I0526 21:23:17.889837  527485 command_runner.go:124] >     {
	I0526 21:23:17.889846  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:23:17.889861  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.889869  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:23:17.889876  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889884  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.889897  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:23:17.889902  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889906  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:23:17.889910  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.889915  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:23:17.889919  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.889922  527485 command_runner.go:124] >     },
	I0526 21:23:17.889925  527485 command_runner.go:124] >     {
	I0526 21:23:17.889932  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:23:17.889937  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.889945  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:23:17.889952  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889964  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.889980  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:23:17.889987  527485 command_runner.go:124] >       ],
	I0526 21:23:17.889994  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:23:17.890001  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890006  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890010  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890013  527485 command_runner.go:124] >     },
	I0526 21:23:17.890017  527485 command_runner.go:124] >     {
	I0526 21:23:17.890023  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:23:17.890030  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890038  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:23:17.890047  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890055  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890069  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:23:17.890075  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890082  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:23:17.890090  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890096  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890102  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890105  527485 command_runner.go:124] >     },
	I0526 21:23:17.890109  527485 command_runner.go:124] >     {
	I0526 21:23:17.890119  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:23:17.890128  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890136  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:23:17.890142  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890148  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890161  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:23:17.890169  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890175  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:23:17.890182  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890187  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890191  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890195  527485 command_runner.go:124] >     },
	I0526 21:23:17.890205  527485 command_runner.go:124] >     {
	I0526 21:23:17.890219  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:23:17.890226  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890233  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:23:17.890239  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890247  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890258  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:23:17.890266  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890272  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:23:17.890276  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890280  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890285  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890293  527485 command_runner.go:124] >     },
	I0526 21:23:17.890298  527485 command_runner.go:124] >     {
	I0526 21:23:17.890308  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:23:17.890318  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890326  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:23:17.890331  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890340  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890353  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:23:17.890358  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890362  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:23:17.890366  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.890372  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.890379  527485 command_runner.go:124] >       },
	I0526 21:23:17.890386  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890394  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890399  527485 command_runner.go:124] >     },
	I0526 21:23:17.890404  527485 command_runner.go:124] >     {
	I0526 21:23:17.890415  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:23:17.890425  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890433  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:23:17.890441  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890445  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890455  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:23:17.890463  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890470  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:23:17.890477  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.890483  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.890490  527485 command_runner.go:124] >       },
	I0526 21:23:17.890496  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890504  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890509  527485 command_runner.go:124] >     },
	I0526 21:23:17.890514  527485 command_runner.go:124] >     {
	I0526 21:23:17.890524  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:23:17.890533  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890540  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:23:17.890551  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890557  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890571  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:23:17.890576  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890585  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:23:17.890591  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890599  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890605  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890611  527485 command_runner.go:124] >     },
	I0526 21:23:17.890615  527485 command_runner.go:124] >     {
	I0526 21:23:17.890624  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:23:17.890633  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890641  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:23:17.890648  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890654  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890668  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:23:17.890675  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890685  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:23:17.890694  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:17.890701  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:17.890705  527485 command_runner.go:124] >       },
	I0526 21:23:17.890735  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890745  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890750  527485 command_runner.go:124] >     },
	I0526 21:23:17.890756  527485 command_runner.go:124] >     {
	I0526 21:23:17.890770  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:23:17.890775  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:17.890787  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:23:17.890795  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890802  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:17.890816  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:23:17.890823  527485 command_runner.go:124] >       ],
	I0526 21:23:17.890829  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:23:17.890837  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:17.890843  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:17.890850  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:17.890856  527485 command_runner.go:124] >     }
	I0526 21:23:17.890863  527485 command_runner.go:124] >   ]
	I0526 21:23:17.890868  527485 command_runner.go:124] > }
	I0526 21:23:17.891012  527485 containerd.go:570] all images are preloaded for containerd runtime.
	I0526 21:23:17.891026  527485 cache_images.go:74] Images are preloaded, skipping loading
	I0526 21:23:17.891071  527485 ssh_runner.go:149] Run: sudo crictl info
	I0526 21:23:17.909686  527485 command_runner.go:124] > {
	I0526 21:23:17.909698  527485 command_runner.go:124] >   "status": {
	I0526 21:23:17.909702  527485 command_runner.go:124] >     "conditions": [
	I0526 21:23:17.909706  527485 command_runner.go:124] >       {
	I0526 21:23:17.909711  527485 command_runner.go:124] >         "type": "RuntimeReady",
	I0526 21:23:17.909715  527485 command_runner.go:124] >         "status": true,
	I0526 21:23:17.909719  527485 command_runner.go:124] >         "reason": "",
	I0526 21:23:17.909724  527485 command_runner.go:124] >         "message": ""
	I0526 21:23:17.909727  527485 command_runner.go:124] >       },
	I0526 21:23:17.909734  527485 command_runner.go:124] >       {
	I0526 21:23:17.909739  527485 command_runner.go:124] >         "type": "NetworkReady",
	I0526 21:23:17.909743  527485 command_runner.go:124] >         "status": false,
	I0526 21:23:17.909749  527485 command_runner.go:124] >         "reason": "NetworkPluginNotReady",
	I0526 21:23:17.909756  527485 command_runner.go:124] >         "message": "Network plugin returns error: cni plugin not initialized"
	I0526 21:23:17.909776  527485 command_runner.go:124] >       }
	I0526 21:23:17.909789  527485 command_runner.go:124] >     ]
	I0526 21:23:17.909796  527485 command_runner.go:124] >   },
	I0526 21:23:17.909799  527485 command_runner.go:124] >   "cniconfig": {
	I0526 21:23:17.909803  527485 command_runner.go:124] >     "PluginDirs": [
	I0526 21:23:17.909808  527485 command_runner.go:124] >       "/opt/cni/bin"
	I0526 21:23:17.909812  527485 command_runner.go:124] >     ],
	I0526 21:23:17.909817  527485 command_runner.go:124] >     "PluginConfDir": "/etc/cni/net.mk",
	I0526 21:23:17.909821  527485 command_runner.go:124] >     "PluginMaxConfNum": 1,
	I0526 21:23:17.909825  527485 command_runner.go:124] >     "Prefix": "eth",
	I0526 21:23:17.909829  527485 command_runner.go:124] >     "Networks": [
	I0526 21:23:17.909833  527485 command_runner.go:124] >       {
	I0526 21:23:17.909837  527485 command_runner.go:124] >         "Config": {
	I0526 21:23:17.909841  527485 command_runner.go:124] >           "Name": "cni-loopback",
	I0526 21:23:17.909846  527485 command_runner.go:124] >           "CNIVersion": "0.3.1",
	I0526 21:23:17.909850  527485 command_runner.go:124] >           "Plugins": [
	I0526 21:23:17.909854  527485 command_runner.go:124] >             {
	I0526 21:23:17.909858  527485 command_runner.go:124] >               "Network": {
	I0526 21:23:17.909863  527485 command_runner.go:124] >                 "type": "loopback",
	I0526 21:23:17.909869  527485 command_runner.go:124] >                 "ipam": {},
	I0526 21:23:17.909873  527485 command_runner.go:124] >                 "dns": {}
	I0526 21:23:17.909879  527485 command_runner.go:124] >               },
	I0526 21:23:17.909885  527485 command_runner.go:124] >               "Source": "{\"type\":\"loopback\"}"
	I0526 21:23:17.909890  527485 command_runner.go:124] >             }
	I0526 21:23:17.909893  527485 command_runner.go:124] >           ],
	I0526 21:23:17.909902  527485 command_runner.go:124] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0526 21:23:17.909908  527485 command_runner.go:124] >         },
	I0526 21:23:17.909912  527485 command_runner.go:124] >         "IFName": "lo"
	I0526 21:23:17.909917  527485 command_runner.go:124] >       }
	I0526 21:23:17.909921  527485 command_runner.go:124] >     ]
	I0526 21:23:17.909924  527485 command_runner.go:124] >   },
	I0526 21:23:17.909928  527485 command_runner.go:124] >   "config": {
	I0526 21:23:17.909935  527485 command_runner.go:124] >     "containerd": {
	I0526 21:23:17.909941  527485 command_runner.go:124] >       "snapshotter": "overlayfs",
	I0526 21:23:17.909949  527485 command_runner.go:124] >       "defaultRuntimeName": "default",
	I0526 21:23:17.909955  527485 command_runner.go:124] >       "defaultRuntime": {
	I0526 21:23:17.909960  527485 command_runner.go:124] >         "runtimeType": "io.containerd.runc.v2",
	I0526 21:23:17.909965  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:23:17.909970  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:23:17.909976  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:23:17.909985  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:23:17.909992  527485 command_runner.go:124] >         "options": {},
	I0526 21:23:17.909998  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:23:17.910002  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:23:17.910005  527485 command_runner.go:124] >       },
	I0526 21:23:17.910010  527485 command_runner.go:124] >       "untrustedWorkloadRuntime": {
	I0526 21:23:17.910015  527485 command_runner.go:124] >         "runtimeType": "",
	I0526 21:23:17.910020  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:23:17.910026  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:23:17.910031  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:23:17.910037  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:23:17.910041  527485 command_runner.go:124] >         "options": null,
	I0526 21:23:17.910046  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:23:17.910053  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:23:17.910056  527485 command_runner.go:124] >       },
	I0526 21:23:17.910060  527485 command_runner.go:124] >       "runtimes": {
	I0526 21:23:17.910064  527485 command_runner.go:124] >         "default": {
	I0526 21:23:17.910070  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:23:17.910075  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:23:17.910080  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:23:17.910087  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:23:17.910092  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:23:17.910098  527485 command_runner.go:124] >           "options": {},
	I0526 21:23:17.910108  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:23:17.910115  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:23:17.910118  527485 command_runner.go:124] >         },
	I0526 21:23:17.910122  527485 command_runner.go:124] >         "runc": {
	I0526 21:23:17.910127  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:23:17.910133  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:23:17.910137  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:23:17.910146  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:23:17.910151  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:23:17.910155  527485 command_runner.go:124] >           "options": {},
	I0526 21:23:17.910162  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:23:17.910169  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:23:17.910172  527485 command_runner.go:124] >         }
	I0526 21:23:17.910176  527485 command_runner.go:124] >       },
	I0526 21:23:17.910180  527485 command_runner.go:124] >       "noPivot": false,
	I0526 21:23:17.910185  527485 command_runner.go:124] >       "disableSnapshotAnnotations": true,
	I0526 21:23:17.910189  527485 command_runner.go:124] >       "discardUnpackedLayers": false
	I0526 21:23:17.910193  527485 command_runner.go:124] >     },
	I0526 21:23:17.910196  527485 command_runner.go:124] >     "cni": {
	I0526 21:23:17.910200  527485 command_runner.go:124] >       "binDir": "/opt/cni/bin",
	I0526 21:23:17.910205  527485 command_runner.go:124] >       "confDir": "/etc/cni/net.mk",
	I0526 21:23:17.910209  527485 command_runner.go:124] >       "maxConfNum": 1,
	I0526 21:23:17.910213  527485 command_runner.go:124] >       "confTemplate": ""
	I0526 21:23:17.910216  527485 command_runner.go:124] >     },
	I0526 21:23:17.910220  527485 command_runner.go:124] >     "registry": {
	I0526 21:23:17.910224  527485 command_runner.go:124] >       "mirrors": {
	I0526 21:23:17.910228  527485 command_runner.go:124] >         "docker.io": {
	I0526 21:23:17.910232  527485 command_runner.go:124] >           "endpoint": [
	I0526 21:23:17.910237  527485 command_runner.go:124] >             "https://registry-1.docker.io"
	I0526 21:23:17.910240  527485 command_runner.go:124] >           ]
	I0526 21:23:17.910244  527485 command_runner.go:124] >         }
	I0526 21:23:17.910247  527485 command_runner.go:124] >       },
	I0526 21:23:17.910251  527485 command_runner.go:124] >       "configs": null,
	I0526 21:23:17.910255  527485 command_runner.go:124] >       "auths": null,
	I0526 21:23:17.910259  527485 command_runner.go:124] >       "headers": null
	I0526 21:23:17.910262  527485 command_runner.go:124] >     },
	I0526 21:23:17.910266  527485 command_runner.go:124] >     "imageDecryption": {
	I0526 21:23:17.910270  527485 command_runner.go:124] >       "keyModel": ""
	I0526 21:23:17.910273  527485 command_runner.go:124] >     },
	I0526 21:23:17.910277  527485 command_runner.go:124] >     "disableTCPService": true,
	I0526 21:23:17.910283  527485 command_runner.go:124] >     "streamServerAddress": "",
	I0526 21:23:17.910287  527485 command_runner.go:124] >     "streamServerPort": "10010",
	I0526 21:23:17.910295  527485 command_runner.go:124] >     "streamIdleTimeout": "4h0m0s",
	I0526 21:23:17.910299  527485 command_runner.go:124] >     "enableSelinux": false,
	I0526 21:23:17.910304  527485 command_runner.go:124] >     "selinuxCategoryRange": 1024,
	I0526 21:23:17.910309  527485 command_runner.go:124] >     "sandboxImage": "k8s.gcr.io/pause:3.2",
	I0526 21:23:17.910313  527485 command_runner.go:124] >     "statsCollectPeriod": 10,
	I0526 21:23:17.910317  527485 command_runner.go:124] >     "systemdCgroup": false,
	I0526 21:23:17.910323  527485 command_runner.go:124] >     "enableTLSStreaming": false,
	I0526 21:23:17.910329  527485 command_runner.go:124] >     "x509KeyPairStreaming": {
	I0526 21:23:17.910333  527485 command_runner.go:124] >       "tlsCertFile": "",
	I0526 21:23:17.910337  527485 command_runner.go:124] >       "tlsKeyFile": ""
	I0526 21:23:17.910340  527485 command_runner.go:124] >     },
	I0526 21:23:17.910345  527485 command_runner.go:124] >     "maxContainerLogSize": 16384,
	I0526 21:23:17.910349  527485 command_runner.go:124] >     "disableCgroup": false,
	I0526 21:23:17.910353  527485 command_runner.go:124] >     "disableApparmor": false,
	I0526 21:23:17.910357  527485 command_runner.go:124] >     "restrictOOMScoreAdj": false,
	I0526 21:23:17.910362  527485 command_runner.go:124] >     "maxConcurrentDownloads": 3,
	I0526 21:23:17.910366  527485 command_runner.go:124] >     "disableProcMount": false,
	I0526 21:23:17.910370  527485 command_runner.go:124] >     "unsetSeccompProfile": "",
	I0526 21:23:17.910375  527485 command_runner.go:124] >     "tolerateMissingHugetlbController": true,
	I0526 21:23:17.910381  527485 command_runner.go:124] >     "disableHugetlbController": true,
	I0526 21:23:17.910388  527485 command_runner.go:124] >     "ignoreImageDefinedVolumes": false,
	I0526 21:23:17.910398  527485 command_runner.go:124] >     "containerdRootDir": "/mnt/vda1/var/lib/containerd",
	I0526 21:23:17.910404  527485 command_runner.go:124] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0526 21:23:17.910412  527485 command_runner.go:124] >     "rootDir": "/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0526 21:23:17.910417  527485 command_runner.go:124] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0526 21:23:17.910420  527485 command_runner.go:124] >   },
	I0526 21:23:17.910425  527485 command_runner.go:124] >   "golang": "go1.13.15",
	I0526 21:23:17.910456  527485 command_runner.go:124] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:23:17.910463  527485 command_runner.go:124] > }
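The crictl info dump above is the interesting part of this phase: RuntimeReady is true, but NetworkReady is false with reason NetworkPluginNotReady because no CNI config exists yet in /etc/cni/net.mk. A one-liner sketch to pull just that condition out of the same output (assuming jq is available on the node):

	sudo crictl info | jq '.status.conditions[] | select(.type == "NetworkReady")'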
	I0526 21:23:17.910821  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:23:17.910838  527485 cni.go:154] 1 nodes found, recommending kindnet
	I0526 21:23:17.910848  527485 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0526 21:23:17.910861  527485 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210526212238-510955 NodeName:multinode-20210526212238-510955 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0526 21:23:17.911001  527485 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-20210526212238-510955"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
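The kubeadm config printed above is what later gets copied to /var/tmp/minikube/kubeadm.yaml on the node (see the cp of kubeadm.yaml.new further down). A sketch for sanity-checking a rendered config like this before the real init would be a dry run against that file with the bundled kubeadm binary; preflight errors may still need the same --ignore-preflight-errors list the real invocation uses:

	sudo /var/lib/minikube/binaries/v1.20.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run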
	I0526 21:23:17.911073  527485 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-20210526212238-510955 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.229 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
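The kubelet unit drop-in and node config shown above are written out by the scp steps that follow (10-kubeadm.conf and kubelet.service). A sketch for inspecting and picking up such a drop-in on the node, assuming standard systemd tooling:

	systemctl cat kubelet
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet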
	I0526 21:23:17.911118  527485 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0526 21:23:17.918180  527485 command_runner.go:124] > kubeadm
	I0526 21:23:17.918191  527485 command_runner.go:124] > kubectl
	I0526 21:23:17.918194  527485 command_runner.go:124] > kubelet
	I0526 21:23:17.918580  527485 binaries.go:44] Found k8s binaries, skipping transfer
	I0526 21:23:17.918625  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0526 21:23:17.925392  527485 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (578 bytes)
	I0526 21:23:17.937000  527485 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0526 21:23:17.948647  527485 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1903 bytes)
	I0526 21:23:17.961092  527485 ssh_runner.go:149] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0526 21:23:17.965162  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
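The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. A quick check of the result (the expected line is the one echoed into the temp file):

	grep 'control-plane.minikube.internal' /etc/hosts
	# expected: 192.168.39.229	control-plane.minikube.internal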
	I0526 21:23:17.975398  527485 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955 for IP: 192.168.39.229
	I0526 21:23:17.975437  527485 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key
	I0526 21:23:17.975450  527485 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key
	I0526 21:23:17.975492  527485 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key
	I0526 21:23:17.975501  527485 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt with IP's: []
	I0526 21:23:18.150789  527485 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt ...
	I0526 21:23:18.150819  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt: {Name:mka353ee94583202e0ac0ab8b589d54e00abd226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.151035  527485 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key ...
	I0526 21:23:18.151068  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key: {Name:mk56ed57fbfad1ce9204b3afb46ba92eb135d7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.151198  527485 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2
	I0526 21:23:18.151213  527485 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2 with IP's: [192.168.39.229 10.96.0.1 127.0.0.1 10.0.0.1]
	I0526 21:23:18.319161  527485 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2 ...
	I0526 21:23:18.319186  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2: {Name:mk60b9f5977b906dd74e7409f8fce67aafe5ae90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.319342  527485 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2 ...
	I0526 21:23:18.319355  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2: {Name:mkea8db32c7e69da0830c942843d97c5a8f24216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.319431  527485 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt.24f4b2b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt
	I0526 21:23:18.319484  527485 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key.24f4b2b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key
	I0526 21:23:18.319536  527485 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key
	I0526 21:23:18.319558  527485 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt with IP's: []
	I0526 21:23:18.451761  527485 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt ...
	I0526 21:23:18.451785  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt: {Name:mk351ac3702144d65129d3ce5ad96c8410dc8c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.451911  527485 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key ...
	I0526 21:23:18.451922  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key: {Name:mk1619fd287078bac03af9aec2063e21580ea46d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:18.452000  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0526 21:23:18.452022  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0526 21:23:18.452035  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0526 21:23:18.452045  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0526 21:23:18.452057  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0526 21:23:18.452072  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0526 21:23:18.452084  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0526 21:23:18.452096  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0526 21:23:18.452143  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem (1338 bytes)
	W0526 21:23:18.452176  527485 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955_empty.pem, impossibly tiny 0 bytes
	I0526 21:23:18.452186  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem (1675 bytes)
	I0526 21:23:18.452207  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem (1078 bytes)
	I0526 21:23:18.452229  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem (1123 bytes)
	I0526 21:23:18.452252  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem (1679 bytes)
	I0526 21:23:18.452281  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.452298  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem -> /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.453162  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0526 21:23:18.471582  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0526 21:23:18.488065  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0526 21:23:18.504684  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0526 21:23:18.520451  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0526 21:23:18.536643  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0526 21:23:18.552821  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0526 21:23:18.569056  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0526 21:23:18.584573  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0526 21:23:18.600174  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem --> /usr/share/ca-certificates/510955.pem (1338 bytes)
	I0526 21:23:18.615487  527485 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0526 21:23:18.626836  527485 ssh_runner.go:149] Run: openssl version
	I0526 21:23:18.632434  527485 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0526 21:23:18.632493  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0526 21:23:18.639790  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.644120  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.644156  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.644186  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:23:18.649647  527485 command_runner.go:124] > b5213941
	I0526 21:23:18.649727  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0526 21:23:18.657004  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510955.pem && ln -fs /usr/share/ca-certificates/510955.pem /etc/ssl/certs/510955.pem"
	I0526 21:23:18.665255  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.669430  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.669745  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.669782  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510955.pem
	I0526 21:23:18.675076  527485 command_runner.go:124] > 51391683
	I0526 21:23:18.675278  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510955.pem /etc/ssl/certs/51391683.0"
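The openssl steps above hash each CA and symlink it into /etc/ssl/certs under its hash, which is how OpenSSL-based clients locate trusted CAs. A sketch for confirming the chain by hand once the certs are in place (paths as used in this run; sudo assumed since the files are root-owned):

	sudo openssl x509 -noout -subject -issuer -in /var/lib/minikube/certs/apiserver.crt
	sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt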
	I0526 21:23:18.682633  527485 kubeadm.go:390] StartCluster: {Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0526 21:23:18.682698  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0526 21:23:18.682731  527485 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 21:23:18.704402  527485 cri.go:76] found id: ""
	I0526 21:23:18.704439  527485 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0526 21:23:18.712334  527485 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0526 21:23:18.712354  527485 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0526 21:23:18.712361  527485 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0526 21:23:18.712683  527485 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0526 21:23:18.719494  527485 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0526 21:23:18.726749  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0526 21:23:18.726770  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0526 21:23:18.726782  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0526 21:23:18.726800  527485 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 21:23:18.727063  527485 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 21:23:18.727086  527485 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0526 21:23:18.870427  527485 command_runner.go:124] > [init] Using Kubernetes version: v1.20.2
	I0526 21:23:18.870523  527485 command_runner.go:124] > [preflight] Running pre-flight checks
	I0526 21:23:19.181632  527485 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0526 21:23:19.181802  527485 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0526 21:23:19.181923  527485 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0526 21:23:19.285040  527485 out.go:197]   - Generating certificates and keys ...
	I0526 21:23:19.282935  527485 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0526 21:23:19.285188  527485 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0526 21:23:19.285310  527485 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0526 21:23:19.412102  527485 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0526 21:23:19.592359  527485 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0526 21:23:19.823466  527485 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0526 21:23:20.134473  527485 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0526 21:23:20.238455  527485 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0526 21:23:20.238645  527485 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210526212238-510955] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0526 21:23:20.610159  527485 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0526 21:23:20.610354  527485 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210526212238-510955] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0526 21:23:20.699903  527485 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0526 21:23:20.838222  527485 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0526 21:23:20.943728  527485 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0526 21:23:20.943984  527485 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0526 21:23:21.116330  527485 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0526 21:23:21.269108  527485 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0526 21:23:21.477568  527485 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0526 21:23:21.664768  527485 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0526 21:23:21.681316  527485 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0526 21:23:21.681810  527485 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0526 21:23:21.681889  527485 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0526 21:23:21.832843  527485 out.go:197]   - Booting up control plane ...
	I0526 21:23:21.830617  527485 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0526 21:23:21.833002  527485 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0526 21:23:21.835950  527485 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0526 21:23:21.836999  527485 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0526 21:23:21.838913  527485 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0526 21:23:21.849864  527485 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0526 21:23:36.852882  527485 command_runner.go:124] > [apiclient] All control plane components are healthy after 15.004823 seconds
	I0526 21:23:36.852998  527485 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0526 21:23:36.890092  527485 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0526 21:23:37.424532  527485 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0526 21:23:37.424782  527485 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210526212238-510955 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0526 21:23:37.947722  527485 out.go:197]   - Configuring RBAC rules ...
	I0526 21:23:37.943949  527485 command_runner.go:124] > [bootstrap-token] Using token: 219e67.wy9tafwbla0sc2zj
	I0526 21:23:37.947854  527485 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0526 21:23:37.959349  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0526 21:23:37.971621  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0526 21:23:37.976035  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0526 21:23:37.981660  527485 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0526 21:23:37.989573  527485 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0526 21:23:38.020400  527485 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0526 21:23:38.407457  527485 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0526 21:23:38.471668  527485 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0526 21:23:38.473019  527485 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0526 21:23:38.473115  527485 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0526 21:23:38.473152  527485 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0526 21:23:38.473219  527485 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0526 21:23:38.473290  527485 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0526 21:23:38.473370  527485 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0526 21:23:38.473469  527485 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0526 21:23:38.473557  527485 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0526 21:23:38.473656  527485 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0526 21:23:38.473751  527485 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0526 21:23:38.473857  527485 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0526 21:23:38.473954  527485 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0526 21:23:38.474061  527485 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 219e67.wy9tafwbla0sc2zj \
	I0526 21:23:38.474204  527485 command_runner.go:124] >     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace \
	I0526 21:23:38.474247  527485 command_runner.go:124] >     --control-plane 
	I0526 21:23:38.474373  527485 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0526 21:23:38.474486  527485 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 219e67.wy9tafwbla0sc2zj \
	I0526 21:23:38.474618  527485 command_runner.go:124] >     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace 
	I0526 21:23:38.475801  527485 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
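The only preflight warning kubeadm reported is that the kubelet systemd service is not enabled. That is harmless here because minikube manages the kubelet itself, but the remedy the warning text suggests would simply be:

	sudo systemctl enable kubelet.service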
	I0526 21:23:38.475832  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:23:38.475841  527485 cni.go:154] 1 nodes found, recommending kindnet
	I0526 21:23:38.477670  527485 out.go:170] * Configuring CNI (Container Networking Interface) ...
	I0526 21:23:38.477738  527485 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0526 21:23:38.483656  527485 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0526 21:23:38.483677  527485 command_runner.go:124] >   Size: 2849304   	Blocks: 5568       IO Block: 4096   regular file
	I0526 21:23:38.483687  527485 command_runner.go:124] > Device: 10h/16d	Inode: 23213       Links: 1
	I0526 21:23:38.483697  527485 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:23:38.483705  527485 command_runner.go:124] > Access: 2021-05-26 21:22:53.150354389 +0000
	I0526 21:23:38.483715  527485 command_runner.go:124] > Modify: 2021-05-05 21:33:55.000000000 +0000
	I0526 21:23:38.483722  527485 command_runner.go:124] > Change: 2021-05-26 21:22:48.920437741 +0000
	I0526 21:23:38.483729  527485 command_runner.go:124] >  Birth: -
	I0526 21:23:38.483805  527485 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	I0526 21:23:38.483820  527485 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0526 21:23:38.502629  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0526 21:23:38.941773  527485 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0526 21:23:38.948736  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0526 21:23:38.957671  527485 command_runner.go:124] > serviceaccount/kindnet created
	I0526 21:23:38.973203  527485 command_runner.go:124] > daemonset.apps/kindnet created
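With the kindnet RBAC objects, service account and DaemonSet created, the CNI should come up and flip NetworkReady to true. A sketch for watching that rollout with the same kubectl binary and kubeconfig this run uses (the kube-system namespace is an assumption; the manifest output above does not print it):

	sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset kindnet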
	I0526 21:23:38.975433  527485 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0526 21:23:38.975491  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:38.975514  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff minikube.k8s.io/name=multinode-20210526212238-510955 minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:39.150616  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0526 21:23:39.154017  527485 command_runner.go:124] > -16
	I0526 21:23:39.154057  527485 ops.go:34] apiserver oom_adj: -16
	I0526 21:23:39.154092  527485 command_runner.go:124] > node/multinode-20210526212238-510955 labeled
	I0526 21:23:39.154410  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:39.277118  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:39.778285  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:39.888574  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:40.278401  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:40.377727  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:40.778027  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:40.874277  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:41.278416  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:41.387407  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:41.778275  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:41.899349  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:42.277987  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:42.370237  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:42.778358  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:42.885788  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:43.277993  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:43.376936  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:43.778336  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:43.875676  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:44.277650  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:44.374442  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:44.778675  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:44.890116  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:45.278626  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:45.378443  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:45.777968  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:45.877181  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:46.277884  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:46.383710  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:46.778571  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:46.882908  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:47.277946  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:47.371861  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:47.777746  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:47.879722  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:48.278491  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:48.377059  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:48.778243  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:48.878796  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:49.278610  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:49.381977  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:49.778335  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:49.881825  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:50.278408  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:50.373508  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:50.778066  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:50.883670  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:51.277752  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:51.379649  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:51.777682  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:51.879720  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:52.278027  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:52.456170  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:52.777962  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:52.901956  527485 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0526 21:23:53.278228  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0526 21:23:53.645830  527485 command_runner.go:124] > NAME      SECRETS   AGE
	I0526 21:23:53.645853  527485 command_runner.go:124] > default   0         0s
	I0526 21:23:53.648299  527485 kubeadm.go:985] duration metric: took 14.672869734s to wait for elevateKubeSystemPrivileges.
	I0526 21:23:53.648338  527485 kubeadm.go:392] StartCluster complete in 34.965708295s
	I0526 21:23:53.648363  527485 settings.go:142] acquiring lock: {Name:mkb47980bcf6470cf1fcb3a16dfb83321726bd1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:53.648516  527485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:53.650210  527485 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig: {Name:mk1cc7fc8b8e5fab9f3b22f1113879e2241e6726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:23:53.651164  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:53.651914  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:23:53.653003  527485 cert_rotation.go:137] Starting client certificate rotation controller
	I0526 21:23:53.654302  527485 round_trippers.go:422] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0526 21:23:53.654317  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:53.654322  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:53.654326  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:53.668197  527485 round_trippers.go:448] Response Status: 200 OK in 13 milliseconds
	I0526 21:23:53.668217  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:53.668223  527485 round_trippers.go:454]     Content-Length: 291
	I0526 21:23:53.668227  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:53 GMT
	I0526 21:23:53.668230  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:53.668233  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:53.668236  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:53.668239  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:53.668259  527485 request.go:1107] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"397","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:53.668845  527485 request.go:1107] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"397","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:53.668903  527485 round_trippers.go:422] PUT https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0526 21:23:53.668909  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:53.668913  527485 round_trippers.go:433]     Content-Type: application/json
	I0526 21:23:53.668917  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:53.668921  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:53.678955  527485 round_trippers.go:448] Response Status: 200 OK in 10 milliseconds
	I0526 21:23:53.678975  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:53.678979  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:53 GMT
	I0526 21:23:53.678984  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:53.678988  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:53.678992  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:53.678997  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:53.679001  527485 round_trippers.go:454]     Content-Length: 291
	I0526 21:23:53.679021  527485 request.go:1107] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"429","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:54.179482  527485 round_trippers.go:422] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0526 21:23:54.179516  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:54.179522  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:54.179527  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:54.182547  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:54.182569  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:54.182575  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:54.182582  527485 round_trippers.go:454]     Content-Length: 291
	I0526 21:23:54.182586  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:54 GMT
	I0526 21:23:54.182591  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:54.182595  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:54.182599  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:54.182625  527485 request.go:1107] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b51d6d-f826-4099-baa3-75992beb1d32","resourceVersion":"445","creationTimestamp":"2021-05-26T21:23:38Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0526 21:23:54.182752  527485 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210526212238-510955" rescaled to 1
	I0526 21:23:54.182785  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0526 21:23:54.358624  527485 command_runner.go:124] > apiVersion: v1
	I0526 21:23:54.358651  527485 command_runner.go:124] > data:
	I0526 21:23:54.358658  527485 command_runner.go:124] >   Corefile: |
	I0526 21:23:54.358664  527485 command_runner.go:124] >     .:53 {
	I0526 21:23:54.358669  527485 command_runner.go:124] >         errors
	I0526 21:23:54.358679  527485 command_runner.go:124] >         health {
	I0526 21:23:54.358686  527485 command_runner.go:124] >            lameduck 5s
	I0526 21:23:54.358692  527485 command_runner.go:124] >         }
	I0526 21:23:54.358698  527485 command_runner.go:124] >         ready
	I0526 21:23:54.358709  527485 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0526 21:23:54.358720  527485 command_runner.go:124] >            pods insecure
	I0526 21:23:54.358728  527485 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0526 21:23:54.358739  527485 command_runner.go:124] >            ttl 30
	I0526 21:23:54.358746  527485 command_runner.go:124] >         }
	I0526 21:23:54.358753  527485 command_runner.go:124] >         prometheus :9153
	I0526 21:23:54.358763  527485 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0526 21:23:54.358771  527485 command_runner.go:124] >            max_concurrent 1000
	I0526 21:23:54.358780  527485 command_runner.go:124] >         }
	I0526 21:23:54.358786  527485 command_runner.go:124] >         cache 30
	I0526 21:23:54.358795  527485 command_runner.go:124] >         loop
	I0526 21:23:54.358801  527485 command_runner.go:124] >         reload
	I0526 21:23:54.358807  527485 command_runner.go:124] >         loadbalance
	I0526 21:23:54.358812  527485 command_runner.go:124] >     }
	I0526 21:23:54.358819  527485 command_runner.go:124] > kind: ConfigMap
	I0526 21:23:54.358823  527485 command_runner.go:124] > metadata:
	I0526 21:23:54.358853  527485 command_runner.go:124] >   creationTimestamp: "2021-05-26T21:23:38Z"
	I0526 21:23:54.358866  527485 command_runner.go:124] >   managedFields:
	I0526 21:23:54.358875  527485 command_runner.go:124] >   - apiVersion: v1
	I0526 21:23:54.358883  527485 command_runner.go:124] >     fieldsType: FieldsV1
	I0526 21:23:54.358889  527485 command_runner.go:124] >     fieldsV1:
	I0526 21:23:54.358895  527485 command_runner.go:124] >       f:data:
	I0526 21:23:54.358901  527485 command_runner.go:124] >         .: {}
	I0526 21:23:54.358908  527485 command_runner.go:124] >         f:Corefile: {}
	I0526 21:23:54.358916  527485 command_runner.go:124] >     manager: kubeadm
	I0526 21:23:54.358923  527485 command_runner.go:124] >     operation: Update
	I0526 21:23:54.358931  527485 command_runner.go:124] >     time: "2021-05-26T21:23:38Z"
	I0526 21:23:54.358939  527485 command_runner.go:124] >   name: coredns
	I0526 21:23:54.358945  527485 command_runner.go:124] >   namespace: kube-system
	I0526 21:23:54.358953  527485 command_runner.go:124] >   resourceVersion: "260"
	I0526 21:23:54.358961  527485 command_runner.go:124] >   uid: e702ca9d-bb73-430c-8447-a824f2271d73
	I0526 21:23:54.360403  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0526 21:23:54.805623  527485 command_runner.go:124] > configmap/coredns replaced
	I0526 21:23:54.808040  527485 start.go:720] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0526 21:23:54.808095  527485 start.go:209] Will wait 6m0s for node &{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0526 21:23:54.810063  527485 out.go:170] * Verifying Kubernetes components...
	I0526 21:23:54.810129  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:23:54.808149  527485 addons.go:335] enableAddons start: toEnable=map[], additional=[]
	I0526 21:23:54.808458  527485 cache.go:108] acquiring lock: {Name:mk0fbd6526c48f14b253d250dd93663316e68dc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:23:54.810241  527485 addons.go:55] Setting default-storageclass=true in profile "multinode-20210526212238-510955"
	I0526 21:23:54.810268  527485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210526212238-510955"
	I0526 21:23:54.810241  527485 addons.go:55] Setting storage-provisioner=true in profile "multinode-20210526212238-510955"
	I0526 21:23:54.810392  527485 addons.go:131] Setting addon storage-provisioner=true in "multinode-20210526212238-510955"
	W0526 21:23:54.810415  527485 addons.go:140] addon storage-provisioner should already be in state true
	I0526 21:23:54.810344  527485 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 exists
	I0526 21:23:54.810457  527485 host.go:66] Checking if "multinode-20210526212238-510955" exists ...
	I0526 21:23:54.810470  527485 cache.go:97] cache image "minikube-local-cache-test:functional-20210526211257-510955" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955" took 2.019275ms
	I0526 21:23:54.810490  527485 cache.go:81] save to tar file minikube-local-cache-test:functional-20210526211257-510955 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 succeeded
	I0526 21:23:54.810502  527485 cache.go:88] Successfully saved all images to host disk.
	I0526 21:23:54.810813  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.810857  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.810927  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.810967  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.811052  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.811090  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.822544  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0526 21:23:54.823012  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.823693  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.823729  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.824104  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.824291  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.824813  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:54.825461  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:23:54.826971  527485 node_ready.go:35] waiting up to 6m0s for node "multinode-20210526212238-510955" to be "Ready" ...
	I0526 21:23:54.827050  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:54.827062  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:54.827069  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:54.827077  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:54.828220  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0526 21:23:54.828510  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:23:54.828544  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.828965  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.828983  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.829006  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:23:54.829501  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.829930  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.829960  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.830501  527485 round_trippers.go:422] GET https://192.168.39.229:8443/apis/storage.k8s.io/v1/storageclasses
	I0526 21:23:54.830514  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:54.830520  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:54.830524  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:54.831071  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:54.831089  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:54.831095  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:54.831100  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:54.831105  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:54.831109  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:54 GMT
	I0526 21:23:54.831114  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:54.831646  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:54.838612  527485 round_trippers.go:448] Response Status: 200 OK in 8 milliseconds
	I0526 21:23:54.838627  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:54.838632  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:54.838637  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:54.838641  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:54.838645  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:54.838650  527485 round_trippers.go:454]     Content-Length: 109
	I0526 21:23:54.838654  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:54 GMT
	I0526 21:23:54.838670  527485 request.go:1107] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"452"},"items":[]}
	I0526 21:23:54.838945  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:41279
	I0526 21:23:54.839318  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.839433  527485 addons.go:131] Setting addon default-storageclass=true in "multinode-20210526212238-510955"
	W0526 21:23:54.839448  527485 addons.go:140] addon default-storageclass should already be in state true
	I0526 21:23:54.839473  527485 host.go:66] Checking if "multinode-20210526212238-510955" exists ...
	I0526 21:23:54.839790  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.839809  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.839856  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.839888  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.840165  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.840362  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.840517  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36599
	I0526 21:23:54.841183  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.841701  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.841725  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.842084  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.842264  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.844326  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.844368  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.845563  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:23:54.847895  527485 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:23:54.848006  527485 addons.go:268] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0526 21:23:54.848021  527485 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0526 21:23:54.848041  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:23:54.851917  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0526 21:23:54.852300  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.852732  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.852754  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.853135  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.853629  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:23:54.853669  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:23:54.853819  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.854262  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:54.854293  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.854386  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:23:54.854548  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:23:54.854706  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:23:54.854855  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:23:54.856100  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:41329
	I0526 21:23:54.856458  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.856902  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.856930  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.857237  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.857421  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:23:54.857576  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:23:54.857599  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:23:54.862859  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.863221  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:54.863250  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.863372  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:23:54.863534  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:23:54.863692  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:23:54.863834  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:23:54.865144  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38367
	I0526 21:23:54.865512  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:23:54.865925  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:23:54.865976  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:23:54.866265  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:23:54.866451  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:23:54.869024  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:23:54.869208  527485 addons.go:268] installing /etc/kubernetes/addons/storageclass.yaml
	I0526 21:23:54.869222  527485 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0526 21:23:54.869235  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:23:54.873881  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.874186  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:23:54.874206  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:23:54.874371  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:23:54.874532  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:23:54.874662  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:23:54.874780  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:23:54.972917  527485 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0526 21:23:55.013015  527485 command_runner.go:124] > {
	I0526 21:23:55.013033  527485 command_runner.go:124] >   "images": [
	I0526 21:23:55.013037  527485 command_runner.go:124] >     {
	I0526 21:23:55.013046  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:23:55.013050  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013057  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:23:55.013061  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013065  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013074  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:23:55.013078  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013082  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:23:55.013087  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013091  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013097  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013101  527485 command_runner.go:124] >     },
	I0526 21:23:55.013105  527485 command_runner.go:124] >     {
	I0526 21:23:55.013114  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:23:55.013120  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013126  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:23:55.013130  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013134  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013142  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:23:55.013147  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013151  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:23:55.013154  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013158  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:23:55.013162  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013166  527485 command_runner.go:124] >     },
	I0526 21:23:55.013172  527485 command_runner.go:124] >     {
	I0526 21:23:55.013179  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:23:55.013183  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013188  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:23:55.013193  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013199  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013207  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:23:55.013212  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013216  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:23:55.013220  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013224  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013227  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013231  527485 command_runner.go:124] >     },
	I0526 21:23:55.013234  527485 command_runner.go:124] >     {
	I0526 21:23:55.013241  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:23:55.013245  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013251  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:23:55.013254  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013258  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013266  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:23:55.013271  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013275  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:23:55.013278  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013282  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013289  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013293  527485 command_runner.go:124] >     },
	I0526 21:23:55.013296  527485 command_runner.go:124] >     {
	I0526 21:23:55.013303  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:23:55.013307  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013312  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:23:55.013318  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013322  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013332  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:23:55.013341  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013347  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:23:55.013353  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013360  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013364  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013368  527485 command_runner.go:124] >     },
	I0526 21:23:55.013378  527485 command_runner.go:124] >     {
	I0526 21:23:55.013386  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:23:55.013390  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013394  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:23:55.013398  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013402  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013409  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:23:55.013413  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013418  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:23:55.013421  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013425  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013429  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013432  527485 command_runner.go:124] >     },
	I0526 21:23:55.013435  527485 command_runner.go:124] >     {
	I0526 21:23:55.013442  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:23:55.013447  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013452  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:23:55.013455  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013459  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013466  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:23:55.013471  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013475  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:23:55.013480  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:55.013484  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:55.013488  527485 command_runner.go:124] >       },
	I0526 21:23:55.013492  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013496  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013499  527485 command_runner.go:124] >     },
	I0526 21:23:55.013502  527485 command_runner.go:124] >     {
	I0526 21:23:55.013509  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:23:55.013515  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013520  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:23:55.013523  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013529  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013536  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:23:55.013541  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013546  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:23:55.013549  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:55.013553  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:55.013557  527485 command_runner.go:124] >       },
	I0526 21:23:55.013561  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013564  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013568  527485 command_runner.go:124] >     },
	I0526 21:23:55.013571  527485 command_runner.go:124] >     {
	I0526 21:23:55.013578  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:23:55.013583  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013588  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:23:55.013591  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013595  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013602  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:23:55.013607  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013611  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:23:55.013615  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013619  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013622  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013626  527485 command_runner.go:124] >     },
	I0526 21:23:55.013629  527485 command_runner.go:124] >     {
	I0526 21:23:55.013636  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:23:55.013641  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013646  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:23:55.013649  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013653  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013660  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:23:55.013666  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013673  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:23:55.013676  527485 command_runner.go:124] >       "uid": {
	I0526 21:23:55.013680  527485 command_runner.go:124] >         "value": "0"
	I0526 21:23:55.013684  527485 command_runner.go:124] >       },
	I0526 21:23:55.013690  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013694  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013697  527485 command_runner.go:124] >     },
	I0526 21:23:55.013701  527485 command_runner.go:124] >     {
	I0526 21:23:55.013713  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:23:55.013720  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:23:55.013724  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:23:55.013727  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013731  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:23:55.013738  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:23:55.013742  527485 command_runner.go:124] >       ],
	I0526 21:23:55.013746  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:23:55.013750  527485 command_runner.go:124] >       "uid": null,
	I0526 21:23:55.013754  527485 command_runner.go:124] >       "username": "",
	I0526 21:23:55.013759  527485 command_runner.go:124] >       "spec": null
	I0526 21:23:55.013762  527485 command_runner.go:124] >     }
	I0526 21:23:55.013765  527485 command_runner.go:124] >   ]
	I0526 21:23:55.013768  527485 command_runner.go:124] > }
	I0526 21:23:55.013872  527485 containerd.go:566] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210526211257-510955". assuming images are not preloaded.
	I0526 21:23:55.013886  527485 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210526211257-510955]
	I0526 21:23:55.013935  527485 image.go:162] retrieving image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.013951  527485 image.go:168] checking repository: index.docker.io/library/minikube-local-cache-test
	I0526 21:23:55.034685  527485 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0526 21:23:55.070096  527485 image.go:175] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0526 21:23:55.070120  527485 image.go:176] short name: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.071089  527485 image.go:204] daemon lookup for minikube-local-cache-test:functional-20210526211257-510955: Error response from daemon: reference does not exist
	W0526 21:23:55.119209  527485 image.go:214] authn lookup for minikube-local-cache-test:functional-20210526211257-510955 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:23:55.160352  527485 image.go:218] remote lookup for minikube-local-cache-test:functional-20210526211257-510955: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:23:55.160396  527485 image.go:95] error retrieve Image minikube-local-cache-test:functional-20210526211257-510955 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0526 21:23:55.160422  527485 cache_images.go:106] "minikube-local-cache-test:functional-20210526211257-510955" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.160451  527485 cri.go:205] Removing image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.160496  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:23:55.304580  527485 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0526 21:23:55.310624  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0526 21:23:55.327245  527485 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0526 21:23:55.334462  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:55.334481  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:55.334487  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:55.334491  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:55.338693  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:23:55.338713  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:55.338719  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:55.338724  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:55.338729  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:55.338733  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:55.338738  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:55 GMT
	I0526 21:23:55.338958  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:55.344583  527485 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0526 21:23:55.370748  527485 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0526 21:23:55.388886  527485 command_runner.go:124] > pod/storage-provisioner created
	I0526 21:23:55.396106  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.396128  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.396382  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.396402  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.396412  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.396421  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.396691  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.396706  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.482536  527485 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0526 21:23:55.482581  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.482589  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.482617  527485 command_runner.go:124] > /bin/crictl
	I0526 21:23:55.482694  527485 ssh_runner.go:149] Run: sudo /bin/crictl rmi minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:23:55.482863  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.482882  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.482897  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.482903  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.482908  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.483131  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.483149  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.483160  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.483176  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.483190  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.483441  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.483466  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.483482  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.485401  527485 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
	I0526 21:23:55.485427  527485 addons.go:337] enableAddons completed in 677.290125ms
	I0526 21:23:55.502787  527485 command_runner.go:124] ! time="2021-05-26T21:23:55Z" level=error msg="no such image minikube-local-cache-test:functional-20210526211257-510955"
	I0526 21:23:55.502807  527485 command_runner.go:124] ! time="2021-05-26T21:23:55Z" level=fatal msg="unable to remove the image(s)"
	I0526 21:23:55.502944  527485 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.502981  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.503045  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.507769  527485 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:23:55.507806  527485 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:23:55.507826  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955 (5120 bytes)
	I0526 21:23:55.526757  527485 containerd.go:260] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.526799  527485 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:23:55.721424  527485 command_runner.go:124] > unpacking docker.io/library/minikube-local-cache-test:functional-20210526211257-510955 (sha256:d8b8bd0a35bb7de49f0a81841d103dd430b2bd6e4ca4d65facee12d3e0605733)...done
	I0526 21:23:55.722976  527485 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 from cache
	I0526 21:23:55.723013  527485 cache_images.go:113] Successfully loaded all cached images
	I0526 21:23:55.723024  527485 cache_images.go:82] LoadImages completed in 709.129952ms
	I0526 21:23:55.723038  527485 cache_images.go:252] succeeded pushing to: multinode-20210526212238-510955
	I0526 21:23:55.723045  527485 cache_images.go:253] failed pushing to: 
	I0526 21:23:55.723070  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.723087  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.723370  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.723392  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:23:55.723392  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.723433  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:23:55.723449  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:23:55.723664  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:23:55.723681  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:23:55.723707  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
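
The lines above record how minikube pushes a locally cached image into the node's containerd: it stats the target path under /var/lib/minikube/images, scp's the tarball from the host cache when the file is missing, then imports it with "sudo ctr -n=k8s.io images import ...". Below is a minimal sketch of that flow, not minikube's actual implementation; runOverSSH and copyToNode are hypothetical stand-ins for minikube's ssh_runner, and only the shell commands mirror what the log shows.

// Hedged sketch of the cache-load flow recorded above: stat the target path,
// copy the tarball over when it is missing, then import it into containerd's
// k8s.io namespace. runOverSSH and copyToNode are hypothetical helpers.
package main

import (
	"fmt"
	"path"
)

func loadCachedImage(runOverSSH func(cmd string) error, copyToNode func(src, dst string) error, hostCachePath string) error {
	dst := path.Join("/var/lib/minikube/images", path.Base(hostCachePath))

	// Existence check: a non-zero exit means the tarball is not on the node yet.
	if err := runOverSSH(fmt.Sprintf("stat %q", dst)); err != nil {
		if err := copyToNode(hostCachePath, dst); err != nil {
			return fmt.Errorf("transfer %s: %w", hostCachePath, err)
		}
	}

	// Import into the k8s.io namespace so the CRI (and hence the kubelet) can see the image.
	return runOverSSH(fmt.Sprintf("sudo ctr -n=k8s.io images import %q", dst))
}

func main() {
	// Stub runners that only print the commands they would execute; the path is illustrative.
	run := func(cmd string) error { fmt.Println("ssh:", cmd); return nil }
	cp := func(src, dst string) error { fmt.Printf("scp: %s -> %s\n", src, dst); return nil }
	_ = loadCachedImage(run, cp, "/home/jenkins/.minikube/cache/images/example-image")
}
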
	I0526 21:23:55.834205  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:55.834222  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:55.834229  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:55.834235  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:55.836956  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:55.836972  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:55.836977  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:55 GMT
	I0526 21:23:55.836982  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:55.836986  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:55.836990  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:55.836995  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:55.837381  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:56.334283  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:56.334299  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:56.334304  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:56.334308  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:56.337012  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:56.337033  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:56.337040  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:56.337045  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:56 GMT
	I0526 21:23:56.337050  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:56.337054  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:56.337057  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:56.337340  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:56.834186  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:56.834209  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:56.834215  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:56.834219  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:56.836716  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:56.836732  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:56.836735  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:56.836739  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:56.836742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:56.836745  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:56.836748  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:56 GMT
	I0526 21:23:56.837308  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:56.837545  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:23:57.334234  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:57.334265  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:57.334273  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:57.334278  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:57.337296  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:57.337321  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:57.337328  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:57.337333  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:57.337338  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:57.337343  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:57.337361  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:57 GMT
	I0526 21:23:57.337978  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:57.833856  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:57.833883  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:57.833890  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:57.833895  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:57.837012  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:23:57.837035  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:57.837048  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:57.837053  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:57.837057  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:57.837062  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:57.837066  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:57 GMT
	I0526 21:23:57.837668  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:58.334394  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:58.334420  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:58.334427  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:58.334433  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:58.336766  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:58.336787  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:58.336791  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:58.336794  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:58.336798  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:58 GMT
	I0526 21:23:58.336801  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:58.336808  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:58.336932  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:58.833873  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:58.833903  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:58.833908  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:58.833912  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:58.836708  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:58.836730  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:58.836736  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:58.836740  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:58.836744  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:58 GMT
	I0526 21:23:58.836749  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:58.836752  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:58.836927  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:59.334230  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:59.334253  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:59.334258  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:59.334262  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:59.337146  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:59.337164  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:59.337170  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:59.337175  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:59 GMT
	I0526 21:23:59.337181  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:59.337186  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:59.337191  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:59.337362  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:23:59.337698  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:23:59.833620  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:23:59.833640  527485 round_trippers.go:429] Request Headers:
	I0526 21:23:59.833645  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:23:59.833649  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:23:59.836447  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:23:59.836466  527485 round_trippers.go:451] Response Headers:
	I0526 21:23:59.836472  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:23:59.836476  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:23:59.836481  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:23:59 GMT
	I0526 21:23:59.836485  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:23:59.836490  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:23:59.836772  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:00.333560  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:00.333597  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:00.333610  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:00.333620  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:00.336524  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:00.336538  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:00.336544  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:00 GMT
	I0526 21:24:00.336549  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:00.336553  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:00.336561  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:00.336565  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:00.337143  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:00.834085  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:00.834110  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:00.834115  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:00.834120  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:00.837544  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:00.837560  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:00.837566  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:00.837570  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:00 GMT
	I0526 21:24:00.837573  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:00.837576  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:00.837580  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:00.837949  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:01.333667  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:01.333707  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:01.333720  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:01.333730  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:01.338441  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:01.338459  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:01.338465  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:01 GMT
	I0526 21:24:01.338469  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:01.338473  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:01.338477  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:01.338482  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:01.338551  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:01.338779  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:24:01.834397  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:01.834414  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:01.834419  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:01.834423  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:01.839473  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:24:01.839495  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:01.839501  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:01.839508  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:01.839512  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:01.839517  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:01 GMT
	I0526 21:24:01.839523  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:01.839638  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:02.334370  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:02.334390  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:02.334394  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:02.334398  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:02.336880  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:02.336895  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:02.336900  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:02 GMT
	I0526 21:24:02.336905  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:02.336909  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:02.336913  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:02.336918  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:02.337231  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:02.834118  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:02.834141  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:02.834150  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:02.834158  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:02.836416  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:02.836432  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:02.836438  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:02.836441  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:02.836444  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:02 GMT
	I0526 21:24:02.836447  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:02.836452  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:02.836668  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:03.334415  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:03.334435  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:03.334442  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:03.334448  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:03.337266  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:03.337283  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:03.337289  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:03.337293  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:03 GMT
	I0526 21:24:03.337297  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:03.337312  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:03.337316  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:03.337467  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:03.833582  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:03.833623  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:03.833636  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:03.833647  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:03.836804  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:03.836821  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:03.836825  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:03.836829  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:03 GMT
	I0526 21:24:03.836832  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:03.836835  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:03.836838  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:03.837289  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"402","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6178 chars]
	I0526 21:24:03.837518  527485 node_ready.go:58] node "multinode-20210526212238-510955" has status "Ready":"False"
	I0526 21:24:04.333679  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:04.333734  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.333753  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.333776  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.336470  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.336488  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.336497  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.336501  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.336506  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.336510  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.336517  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.336603  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:04.336882  527485 node_ready.go:49] node "multinode-20210526212238-510955" has status "Ready":"True"
	I0526 21:24:04.336901  527485 node_ready.go:38] duration metric: took 9.509909886s waiting for node "multinode-20210526212238-510955" to be "Ready" ...
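
The polling loop above issues a GET against /api/v1/nodes/<name> roughly every 500ms and inspects the node's Ready condition until it flips to True (which here took about 9.5s). The sketch below reproduces the same check with client-go rather than minikube's internal round-trippers; it assumes a kubeconfig at the default path and a recent client-go (context-taking Get signatures), and only the node name and the half-second cadence are taken from the log.

// Minimal sketch of the node-ready poll seen above, using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// The node counts as ready once its NodeReady condition reports True.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-20210526212238-510955"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
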
	I0526 21:24:04.336912  527485 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0526 21:24:04.336985  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:24:04.337006  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.337013  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.337019  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.339707  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.339722  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.339727  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.339732  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.339735  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.339738  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.339742  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.340833  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"478"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},
"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 50612 chars]
	I0526 21:24:04.348904  527485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:24:04.348965  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:04.348971  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.348976  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.348980  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.351219  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.351232  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.351237  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.351242  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.351247  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.351251  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.351255  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.351324  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:04.856942  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:04.856987  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:04.857002  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:04.857014  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:04.859771  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:04.859786  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:04.859790  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:04.859794  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:04.859797  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:04.859800  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:04.859802  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:04 GMT
	I0526 21:24:04.860245  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:05.356468  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:05.356512  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:05.356524  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:05.356535  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:05.358567  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:05.358589  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:05.358595  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:05.358600  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:05 GMT
	I0526 21:24:05.358604  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:05.358608  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:05.358612  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:05.359053  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:05.857110  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:05.857153  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:05.857166  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:05.857177  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:05.861312  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:05.861331  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:05.861337  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:05 GMT
	I0526 21:24:05.861344  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:05.861348  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:05.861352  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:05.861357  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:05.862236  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:06.357234  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:06.357280  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:06.357294  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:06.357304  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:06.360607  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:06.360627  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:06.360633  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:06.360638  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:06.360644  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:06.360649  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:06.360654  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:06 GMT
	I0526 21:24:06.360824  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:06.361411  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-26 21:23:53 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0526 21:24:06.856568  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:06.856608  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:06.856639  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:06.856661  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:06.858795  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:06.858812  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:06.858816  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:06.858827  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:06.858838  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:06 GMT
	I0526 21:24:06.858844  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:06.858857  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:06.859182  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:07.357084  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:07.357102  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:07.357107  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:07.357111  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:07.359624  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:07.359644  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:07.359650  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:07.359655  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:07.359661  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:07.359667  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:07.359671  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:07 GMT
	I0526 21:24:07.359972  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:07.856713  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:07.856733  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:07.856738  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:07.856742  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:07.859475  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:07.859491  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:07.859497  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:07 GMT
	I0526 21:24:07.859501  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:07.859506  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:07.859510  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:07.859516  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:07.859595  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:08.356263  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:08.356286  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:08.356291  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:08.356302  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:08.358596  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:08.358613  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:08.358618  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:08.358625  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:08 GMT
	I0526 21:24:08.358629  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:08.358633  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:08.358637  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:08.359132  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:08.857128  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:08.857147  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:08.857152  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:08.857156  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:08.859899  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:08.859917  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:08.859921  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:08.859925  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:08.859928  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:08.859933  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:08.859939  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:08 GMT
	I0526 21:24:08.860080  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"421","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 4425 chars]
	I0526 21:24:08.860329  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-26 21:23:53 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0526 21:24:09.356213  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:09.356243  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.356251  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.356256  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.360376  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:09.360393  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.360398  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.360401  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.360404  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.360407  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.360411  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.361262  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"485","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5675 chars]
	I0526 21:24:09.361595  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:09.361610  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.361615  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.361618  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.363586  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:09.363598  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.363603  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.363608  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.363612  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.363616  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.363621  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.364206  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:09.857093  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:09.857121  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.857134  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.857138  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.860493  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:09.860508  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.860514  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.860517  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.860521  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.860525  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.860530  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.860608  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"485","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5675 chars]
	I0526 21:24:09.860950  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:09.860964  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:09.860969  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:09.860974  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:09.863188  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:09.863204  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:09.863209  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:09.863213  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:09.863217  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:09.863219  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:09.863222  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:09 GMT
	I0526 21:24:09.863392  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:10.357197  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:10.357222  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.357228  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.357232  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.359608  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:10.359630  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.359634  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.359637  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.359640  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.359646  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.359649  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.360273  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:10.360614  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:10.360627  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.360631  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.360635  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.362958  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:10.362976  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.362980  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.362985  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.362988  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.362992  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.362996  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.363216  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:10.857155  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:10.857180  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.857185  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.857190  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.859704  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:10.859721  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.859725  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.859728  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.859731  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.859735  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.859738  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.860472  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:10.860834  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:10.860854  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:10.860879  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:10.860892  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:10.862660  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:10.862674  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:10.862678  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:10.862684  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:10.862689  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:10.862701  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:10.862706  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:10 GMT
	I0526 21:24:10.863031  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:10.863289  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:11.357019  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:11.357060  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.357086  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.357091  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.359266  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:11.359279  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.359283  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.359286  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.359289  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.359292  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.359295  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.359589  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:11.359886  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:11.359897  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.359902  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.359906  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.361862  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:11.361877  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.361883  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.361888  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.361893  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.361899  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.361904  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.362186  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:11.857052  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:11.857071  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.857076  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.857080  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.859617  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:11.859634  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.859640  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.859645  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.859651  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.859661  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.859666  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.859919  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:11.860195  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:11.860207  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:11.860212  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:11.860216  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:11.862368  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:11.862382  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:11.862385  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:11.862389  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:11.862391  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:11.862394  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:11.862399  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:11 GMT
	I0526 21:24:11.862557  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:12.356389  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:12.356427  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.356446  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.356459  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.359392  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:12.359406  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.359410  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.359413  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.359416  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.359419  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.359422  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.359513  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:12.359797  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:12.359817  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.359821  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.359825  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.362817  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:12.362831  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.362835  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.362838  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.362842  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.362845  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.362848  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.363080  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:12.857020  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:12.857067  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.857101  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.857107  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.860555  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:12.860572  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.860580  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.860584  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.860588  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.860591  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.860595  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.860784  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:12.861111  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:12.861128  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:12.861134  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:12.861140  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:12.863018  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:12.863031  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:12.863036  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:12.863041  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:12.863045  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:12 GMT
	I0526 21:24:12.863049  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:12.863053  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:12.863215  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:12.863420  527485 pod_ready.go:102] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:13.357183  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:13.357220  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.357235  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.357245  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.360137  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:13.360151  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.360154  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.360157  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.360160  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.360163  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.360165  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.360614  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:13.360893  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:13.360904  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.360908  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.360912  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.363029  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:13.363041  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.363044  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.363047  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.363050  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.363053  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.363056  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.363173  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:13.857019  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:13.857035  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.857041  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.857045  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.860110  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:13.860121  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.860125  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.860140  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.860143  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.860146  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.860150  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.860345  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"493","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 6009 chars]
	I0526 21:24:13.860666  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:13.860681  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:13.860686  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:13.860690  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:13.862917  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:13.862928  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:13.862932  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:13.862935  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:13.862938  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:13.862941  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:13.862944  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:13 GMT
	I0526 21:24:13.863107  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"477","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 5994 chars]
	I0526 21:24:14.356355  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:24:14.356401  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.356414  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.356440  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.359191  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.359208  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.359212  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.359217  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.359221  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.359224  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.359227  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.359382  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5780 chars]
	I0526 21:24:14.359789  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:14.359812  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.359819  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.359825  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.362141  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.362155  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.362161  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.362165  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.362170  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.362174  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.362177  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.362701  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:14.362941  527485 pod_ready.go:92] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"True"
	I0526 21:24:14.362971  527485 pod_ready.go:81] duration metric: took 10.014041717s waiting for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:24:14.362983  527485 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
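	[editor's note] The lines above show the readiness wait handing off from coredns to etcd: pod_ready polls the target pod (and its node) roughly every 500ms until the pod's Ready condition reports True, within the stated 6m0s budget. The following is a minimal, hypothetical sketch of that polling pattern using client-go; it is not minikube's actual pod_ready.go, and the names waitForPodReady and isPodReady are illustrative only.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodReady polls the pod every 500ms (matching the cadence visible
	// in the log above) until it is Ready or the timeout elapses.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s was not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config) and wait for the etcd pod.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-multinode-20210526212238-510955", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	[end editor's note]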
	I0526 21:24:14.363078  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:14.363089  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.363093  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.363097  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.365117  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.365130  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.365134  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.365137  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.365140  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.365143  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.365145  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.365835  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:14.366143  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:14.366156  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.366161  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.366165  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.368454  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.368466  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.368470  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.368473  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.368476  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.368479  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.368482  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.369060  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:14.869807  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:14.869825  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.869832  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.869844  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.872520  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.872541  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.872546  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.872551  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.872555  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.872559  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.872564  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.873080  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:14.873368  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:14.873380  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:14.873385  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:14.873388  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:14.876066  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:14.876083  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:14.876088  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:14.876093  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:14.876100  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:14.876104  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:14.876108  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:14 GMT
	I0526 21:24:14.876633  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:15.370422  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:15.370441  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.370446  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.370456  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.372511  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:15.372523  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.372527  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.372530  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.372536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.372539  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.372542  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.372964  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:15.373230  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:15.373242  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.373247  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.373250  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.375520  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:15.375534  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.375539  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.375544  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.375548  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.375552  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.375557  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.375887  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:15.869629  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:15.869668  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.869681  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.869692  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.872171  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:15.872184  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.872188  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.872191  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.872196  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.872199  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.872202  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.872469  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:15.872744  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:15.872757  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:15.872761  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:15.872765  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:15.874496  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:15.874509  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:15.874513  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:15.874516  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:15.874519  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:15.874522  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:15.874525  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:15 GMT
	I0526 21:24:15.874918  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:16.369696  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:16.369716  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.369720  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.369724  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.372701  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:16.372717  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.372723  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.372728  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.372732  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.372736  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.372740  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.373179  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:16.373418  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:16.373429  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.373433  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.373437  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.375785  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:16.375805  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.375811  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.375816  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.375821  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.375829  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.375834  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.376395  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:16.376645  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:16.870142  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:16.870164  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.870169  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.870173  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.872964  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:16.872986  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.872992  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.872997  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.873000  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.873005  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.873010  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.873175  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:16.873535  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:16.873554  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:16.873561  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:16.873568  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:16.875490  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:16.875509  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:16.875515  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:16.875520  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:16.875523  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:16.875531  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:16.875543  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:16 GMT
	I0526 21:24:16.875712  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:17.369529  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:17.369567  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.369580  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.369590  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.372684  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:17.372699  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.372703  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.372706  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.372709  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.372713  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.372716  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.373072  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:17.373318  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:17.373328  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.373335  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.373338  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.376353  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:17.376371  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.376377  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.376382  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.376387  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.376393  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.376398  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.376756  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:17.870452  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:17.870469  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.870474  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.870478  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.874427  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:17.874444  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.874449  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.874454  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.874458  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.874463  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.874467  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.874573  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:17.874901  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:17.874922  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:17.874929  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:17.874935  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:17.880463  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:24:17.880478  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:17.880481  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:17 GMT
	I0526 21:24:17.880485  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:17.880488  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:17.880490  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:17.880494  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:17.880837  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:18.369893  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:18.369914  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.369919  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.369923  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.371891  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:18.371905  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.371910  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.371914  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.371919  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.371923  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.371928  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.372424  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:18.372720  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:18.372734  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.372738  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.372742  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.374386  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:18.374400  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.374405  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.374409  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.374413  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.374417  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.374421  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.374780  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:18.869893  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:18.869917  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.869922  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.869925  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.873021  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:18.873035  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.873041  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.873045  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.873048  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.873051  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.873054  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.873430  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:18.873684  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:18.873695  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:18.873699  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:18.873703  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:18.875538  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:18.875552  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:18.875557  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:18.875564  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:18.875573  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:18.875578  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:18.875586  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:18 GMT
	I0526 21:24:18.875767  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:18.875976  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
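	[editor's note] The repeated has status "Ready":"False" lines come from inspecting status.conditions in each Pod response body above. As an illustrative aside (not minikube code), a Pod JSON like the truncated bodies logged here can be reduced to the same True/False value with a small helper; readyStatus is a hypothetical name and the sample body is abbreviated.

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// readyStatus decodes a Pod JSON body and returns the value of its Ready
	// condition ("True", "False", or "Unknown"); absent means not yet Ready.
	func readyStatus(body []byte) (string, error) {
		var pod corev1.Pod
		if err := json.Unmarshal(body, &pod); err != nil {
			return "", err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return string(c.Status), nil
			}
		}
		return string(corev1.ConditionFalse), nil
	}

	func main() {
		// Abbreviated example body; a real one is the full Pod object returned by the API.
		body := []byte(`{"kind":"Pod","apiVersion":"v1","status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
		s, err := readyStatus(body)
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod has status \"Ready\":%q\n", s)
	}

	[end editor's note]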
	I0526 21:24:19.369508  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:19.369543  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.369555  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.369564  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.371942  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:19.371957  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.371962  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.371967  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.371971  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.371976  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.371981  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.372167  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:19.372453  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:19.372466  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.372470  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.372474  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.375361  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:19.375373  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.375377  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.375381  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.375385  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.375389  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.375393  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.375829  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:19.869544  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:19.869588  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.869611  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.869626  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.872766  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:19.872784  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.872789  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.872794  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.872798  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.872802  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.872807  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.872911  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:19.873235  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:19.873257  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:19.873264  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:19.873270  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:19.875000  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:19.875010  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:19.875015  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:19.875019  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:19.875023  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:19.875028  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:19.875032  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:19 GMT
	I0526 21:24:19.875400  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:20.370112  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:20.370137  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.370142  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.370145  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.372387  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.372407  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.372412  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.372417  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.372421  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.372425  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.372430  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.373097  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:20.373411  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:20.373426  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.373430  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.373437  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.375520  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.375531  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.375534  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.375537  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.375540  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.375544  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.375546  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.375769  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:20.869429  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:20.869451  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.869456  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.869460  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.872086  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.872105  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.872111  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.872116  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.872120  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.872136  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.872145  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.872317  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:20.872677  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:20.872692  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:20.872697  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:20.872700  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:20.875254  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:20.875264  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:20.875267  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:20.875270  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:20 GMT
	I0526 21:24:20.875273  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:20.875276  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:20.875279  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:20.875984  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:20.876211  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:21.369735  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:21.369762  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.369768  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.369778  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.372058  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:21.372075  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.372080  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.372085  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.372089  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.372094  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.372098  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.372397  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:21.372747  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:21.372766  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.372773  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.372778  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.375050  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:21.375067  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.375076  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.375080  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.375083  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.375088  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.375092  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.375473  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:21.870195  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:21.870211  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.870220  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.870232  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.872782  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:21.872796  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.872800  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.872803  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.872806  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.872809  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.872815  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.873179  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:21.873495  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:21.873513  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:21.873520  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:21.873526  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:21.875433  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:21.875444  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:21.875448  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:21.875451  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:21.875454  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:21.875456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:21.875459  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:21 GMT
	I0526 21:24:21.876085  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:22.369949  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:22.369969  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.369976  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.369982  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.372425  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:22.372442  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.372447  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.372452  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.372456  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.372460  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.372465  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.372733  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:22.373291  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:22.373309  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.373313  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.373317  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.374939  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:22.374952  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.374958  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.374962  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.374967  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.374972  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.374977  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.375166  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:22.870126  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:22.870148  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.870155  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.870159  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.872882  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:22.872897  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.872902  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.872907  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.872912  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.872916  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.872921  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.873421  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:22.873664  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:22.873674  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:22.873678  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:22.873682  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:22.875830  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:22.875843  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:22.875847  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:22.875850  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:22.875852  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:22 GMT
	I0526 21:24:22.875855  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:22.875858  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:22.876041  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:22.876304  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:23.369961  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:23.370000  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.370005  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.370009  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.372425  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.372437  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.372442  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.372447  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.372451  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.372456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.372459  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.372740  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:23.373035  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:23.373048  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.373053  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.373057  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.375242  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.375256  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.375261  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.375266  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.375270  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.375275  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.375279  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.375675  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:23.869595  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:23.869619  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.869625  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.869631  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.871916  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.871937  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.871942  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.871946  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.871951  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.871955  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.871959  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.872291  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:23.872581  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:23.872594  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:23.872598  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:23.872602  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:23.874793  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:23.874814  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:23.874820  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:23.874824  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:23.874828  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:23.874833  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:23.874838  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:23 GMT
	I0526 21:24:23.875289  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:24.370204  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:24.370229  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.370234  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.370238  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.373042  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:24.373063  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.373069  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.373072  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.373075  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.373079  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.373082  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.373747  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:24.374026  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:24.374037  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.374042  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.374045  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.376408  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:24.376425  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.376431  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.376436  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.376439  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.376443  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.376445  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.376997  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:24.869876  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:24.869900  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.869905  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.869909  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.898018  527485 round_trippers.go:448] Response Status: 200 OK in 28 milliseconds
	I0526 21:24:24.898034  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.898038  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.898041  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.898045  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.898047  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.898050  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.898227  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:24.898647  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:24.898666  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:24.898673  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:24.898678  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:24.900754  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:24.900765  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:24.900768  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:24 GMT
	I0526 21:24:24.900771  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:24.900774  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:24.900777  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:24.900780  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:24.901160  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:24.901438  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:25.370114  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:25.370133  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.370175  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.370185  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.372773  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.372787  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.372799  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.372803  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.372806  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.372810  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.372813  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.373161  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:25.373561  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:25.373577  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.373583  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.373589  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.376172  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.376187  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.376192  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.376197  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.376201  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.376205  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.376214  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.376616  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:25.870571  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:25.870619  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.870638  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.870660  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.873229  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.873242  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.873252  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.873257  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.873261  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.873266  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.873271  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.873708  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:25.874079  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:25.874095  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:25.874101  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:25.874107  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:25.876566  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:25.876578  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:25.876583  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:25.876588  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:25 GMT
	I0526 21:24:25.876593  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:25.876597  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:25.876602  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:25.876750  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:26.369551  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:26.369598  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.369616  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.369633  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.372510  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:26.372526  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.372531  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.372536  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.372540  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.372554  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.372558  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.373076  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:26.373730  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:26.373756  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.373762  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.373769  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.376269  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:26.376286  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.376294  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.376300  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.376304  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.376308  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.376313  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.376652  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:26.870464  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:26.870490  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.870495  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.870499  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.873690  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:26.873705  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.873710  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.873715  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.873719  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.873723  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.873728  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.873983  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:26.874344  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:26.874360  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:26.874367  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:26.874373  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:26.877178  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:26.877191  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:26.877196  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:26.877200  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:26.877205  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:26.877209  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:26.877214  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:26 GMT
	I0526 21:24:26.877922  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:27.369631  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:27.369656  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.369661  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.369665  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.372458  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:27.372475  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.372480  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.372485  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.372489  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.372494  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.372497  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.373241  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:27.373508  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:27.373520  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.373524  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.373528  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.375119  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:27.375133  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.375137  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.375140  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.375143  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.375146  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.375149  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.375842  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:27.376141  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
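
	(Editor's note: each repetition above — GET the etcd static pod, GET its node, then the pod_ready.go:102 line reporting "Ready":"False" — is one iteration of the readiness poll that this test is stuck in: the client re-fetches the pod roughly every 500ms until its Ready condition turns true or the wait times out. The following is a minimal illustrative sketch of that pattern using client-go; it is not minikube's actual pod_ready.go implementation. The kubeconfig path, the 500ms interval, the 6-minute timeout, and the hard-coded pod name are assumptions taken from this log purely for the example.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// i.e. the state the log above keeps finding to be "False".
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: default kubeconfig (~/.kube/config); minikube builds its
		// client differently, this is just the simplest way to get a clientset.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms for up to 6 minutes, matching the cadence visible
		// in the timestamps of the GET requests above.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"etcd-multinode-20210526212238-510955", metav1.GetOptions{})
			if err != nil {
				// Treat transient API errors as "not ready yet" and keep polling.
				return false, nil
			}
			return isPodReady(pod), nil
		})
		fmt.Println("etcd pod ready:", err == nil)
	}

	(End of editor's note; the verbatim log continues below.)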
	I0526 21:24:27.869471  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:27.869489  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.869495  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.869499  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.873711  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:27.873729  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.873734  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.873739  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.873743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.873747  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.873752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.874308  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:27.874659  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:27.874677  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:27.874684  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:27.874691  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:27.876698  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:27.876711  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:27.876715  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:27.876719  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:27.876722  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:27.876726  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:27 GMT
	I0526 21:24:27.876732  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:27.877258  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:28.370153  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:28.370175  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.370180  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.370184  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.372318  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:28.372335  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.372340  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.372345  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.372349  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.372353  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.372358  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.372898  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:28.373242  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:28.373257  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.373263  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.373269  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.375852  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:28.375867  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.375872  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.375877  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.375881  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.375885  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.375889  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.376172  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:28.870331  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:28.870351  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.870357  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.870362  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.873052  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:28.873069  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.873074  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.873078  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.873082  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.873086  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.873090  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.873249  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:28.873500  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:28.873512  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:28.873517  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:28.873520  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:28.875519  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:28.875532  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:28.875536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:28.875540  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:28 GMT
	I0526 21:24:28.875544  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:28.875549  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:28.875553  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:28.876082  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:29.369874  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:29.369893  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.369897  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.369901  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.374023  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:29.374033  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.374036  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.374044  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.374049  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.374055  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.374059  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.374472  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:29.374716  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:29.374727  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.374731  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.374735  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.377890  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:29.377907  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.377912  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.377916  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.377920  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.377925  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.377930  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.378108  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:29.378322  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:29.870116  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:29.870167  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.870188  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.870207  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.873144  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:29.873160  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.873164  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.873167  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.873170  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.873174  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.873179  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.873318  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:29.873662  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:29.873679  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:29.873687  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:29.873694  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:29.875723  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:29.875737  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:29.875743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:29.875747  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:29.875752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:29 GMT
	I0526 21:24:29.875756  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:29.875761  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:29.875865  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:30.369596  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:30.369649  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.369662  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.369672  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.372203  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.372219  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.372225  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.372231  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.372236  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.372241  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.372246  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.372669  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:30.372942  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:30.372957  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.372963  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.372967  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.375226  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.375240  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.375245  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.375250  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.375254  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.375259  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.375264  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.375588  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:30.870433  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:30.870461  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.870466  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.870469  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.872968  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.872982  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.872986  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.872990  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.872993  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.872995  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.873004  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.873522  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:30.873819  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:30.873835  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:30.873843  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:30.873848  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:30.876515  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:30.876530  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:30.876535  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:30.876540  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:30.876545  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:30.876549  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:30.876554  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:30 GMT
	I0526 21:24:30.876853  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:31.369531  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:31.369549  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.369555  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.369559  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.372512  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.372526  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.372531  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.372536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.372540  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.372543  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.372546  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.372908  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:31.373252  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:31.373281  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.373289  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.373295  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.375719  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.375730  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.375734  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.375737  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.375740  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.375742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.375745  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.375973  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:31.869861  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:31.869883  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.869888  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.869893  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.872682  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.872698  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.872702  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.872706  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.872711  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.872716  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.872720  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.872902  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:31.873170  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:31.873185  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:31.873191  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:31.873198  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:31.876043  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:31.876053  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:31.876056  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:31.876059  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:31.876062  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:31 GMT
	I0526 21:24:31.876065  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:31.876068  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:31.876544  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:31.876777  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:32.370395  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:32.370421  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.370430  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.370436  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.372735  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:32.372748  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.372751  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.372755  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.372758  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.372761  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.372764  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.372983  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:32.373214  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:32.373226  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.373231  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.373234  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.375725  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:32.375736  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.375740  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.375743  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.375746  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.375749  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.375752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.376038  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:32.869828  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:32.869857  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.869864  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.869870  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.873185  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:32.873204  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.873212  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.873217  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.873221  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.873226  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.873230  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.873380  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:32.873698  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:32.873712  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:32.873717  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:32.873720  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:32.876823  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:32.876836  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:32.876841  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:32 GMT
	I0526 21:24:32.876846  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:32.876850  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:32.876857  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:32.876876  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:32.876995  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:33.370489  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:33.370512  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.370517  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.370522  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.373572  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:33.373589  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.373595  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.373600  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.373606  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.373610  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.373615  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.374176  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:33.374471  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:33.374487  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.374494  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.374501  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.376562  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:33.376575  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.376579  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.376582  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.376587  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.376590  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.376596  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.376749  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:33.869862  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:33.869900  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.869916  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.869927  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.872662  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:33.872678  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.872683  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.872688  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.872692  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.872696  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.872700  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.873107  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:33.873440  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:33.873457  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:33.873464  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:33.873469  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:33.875563  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:33.875579  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:33.875585  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:33.875589  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:33.875594  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:33.875598  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:33.875602  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:33 GMT
	I0526 21:24:33.876177  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:34.370143  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:34.370164  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.370169  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.370173  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.373134  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.373150  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.373154  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.373158  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.373161  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.373164  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.373166  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.373651  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:34.373911  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:34.373942  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.373947  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.373950  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.376299  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.376311  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.376314  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.376317  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.376323  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.376326  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.376329  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.376840  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:34.377080  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:34.869642  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:34.869684  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.869698  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.869709  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.872541  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.872554  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.872559  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.872564  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.872568  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.872572  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.872575  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.873173  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:34.873439  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:34.873450  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:34.873455  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:34.873458  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:34.875815  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:34.875829  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:34.875835  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:34.875839  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:34.875842  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:34.875845  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:34 GMT
	I0526 21:24:34.875848  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:34.876202  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:35.370126  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:35.370145  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.370150  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.370154  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.372099  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:35.372110  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.372113  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.372117  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.372119  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.372123  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.372125  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.372267  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:35.372529  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:35.372540  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.372546  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.372550  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.374169  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:35.374185  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.374189  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.374192  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.374195  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.374199  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.374201  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.374360  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:35.870255  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:35.870276  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.870281  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.870285  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.872694  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:35.872708  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.872711  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.872715  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.872718  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.872721  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.872726  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.872951  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:35.873202  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:35.873214  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:35.873218  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:35.873222  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:35.875245  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:35.875258  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:35.875262  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:35.875265  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:35 GMT
	I0526 21:24:35.875270  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:35.875273  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:35.875276  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:35.875589  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:36.370425  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:36.370443  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.370448  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.370452  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.373415  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:36.373431  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.373436  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.373441  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.373445  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.373449  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.373456  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.373763  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:36.374086  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:36.374101  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.374106  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.374110  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.376742  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:36.376756  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.376762  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.376767  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.376771  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.376774  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.376779  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.377046  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:36.377377  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:36.870042  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:36.870089  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.870115  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.870121  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.872966  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:36.872986  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.872992  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.872997  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.873003  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.873012  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.873017  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.873194  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:36.873497  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:36.873511  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:36.873516  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:36.873521  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:36.876710  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:36.876723  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:36.876728  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:36.876733  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:36.876737  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:36.876742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:36.876746  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:36 GMT
	I0526 21:24:36.876888  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:37.369587  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:37.369633  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.369646  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.369657  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.372597  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:37.372617  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.372623  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.372628  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.372632  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.372636  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.372640  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.373174  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:37.373512  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:37.373527  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.373532  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.373536  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.375706  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:37.375718  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.375723  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.375728  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.375732  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.375736  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.375744  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.375897  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:37.869633  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:37.869671  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.869684  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.869695  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.872187  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:37.872200  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.872206  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.872211  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.872216  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.872220  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.872225  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.872350  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:37.872620  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:37.872634  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:37.872640  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:37.872646  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:37.874420  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:37.874439  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:37.874443  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:37.874446  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:37.874449  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:37 GMT
	I0526 21:24:37.874452  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:37.874455  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:37.874529  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:38.370303  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:38.370323  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.370331  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.370337  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.373247  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:38.373268  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.373272  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.373276  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.373280  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.373285  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.373290  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.374009  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:38.374298  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:38.374311  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.374315  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.374319  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.377694  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:38.377708  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.377712  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.377716  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.377718  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.377721  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.377724  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.378369  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:38.378614  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:38.869685  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:38.869736  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.869757  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.869795  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.872326  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:38.872346  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.872352  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.872359  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.872363  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.872368  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.872373  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.872506  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:38.872761  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:38.872773  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:38.872777  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:38.872781  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:38.874795  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:38.874807  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:38.874810  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:38.874813  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:38.874816  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:38.874819  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:38.874822  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:38 GMT
	I0526 21:24:38.875023  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:39.370206  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:39.370223  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.370228  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.370232  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.373435  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:39.373456  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.373463  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.373469  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.373474  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.373480  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.373486  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.373968  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:39.374284  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:39.374302  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.374310  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.374319  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.379949  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:24:39.379964  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.379969  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.379973  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.379980  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.379985  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.379990  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.380108  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:39.869983  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:39.870006  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.870013  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.870019  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.872924  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:39.872940  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.872946  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.872950  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.872955  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.872959  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.872963  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.873177  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:39.873518  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:39.873535  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:39.873539  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:39.873544  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:39.875319  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:39.875330  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:39.875335  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:39.875339  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:39.875343  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:39.875348  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:39.875352  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:39 GMT
	I0526 21:24:39.875704  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:40.370463  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:40.370482  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.370486  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.370490  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.373544  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:40.373564  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.373569  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.373574  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.373578  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.373582  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.373587  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.373943  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:40.374194  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:40.374207  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.374213  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.374218  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.376513  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:40.376526  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.376529  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.376532  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.376535  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.376541  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.376544  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.376846  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:40.869472  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:40.869498  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.869506  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.869511  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.871856  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:40.871872  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.871877  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.871880  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.871883  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.871887  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.871890  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.872056  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:40.872437  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:40.872458  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:40.872466  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:40.872472  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:40.874785  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:40.874800  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:40.874803  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:40.874806  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:40.874809  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:40.874812  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:40.874815  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:40 GMT
	I0526 21:24:40.875043  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:40.875292  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:41.369964  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:41.369983  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.369988  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.369992  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.372469  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:41.372483  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.372487  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.372490  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.372493  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.372496  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.372499  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.372974  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:41.373266  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:41.373284  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.373291  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.373297  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.375385  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:41.375397  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.375402  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.375407  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.375412  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.375417  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.375421  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.375772  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:41.869535  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:41.869575  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.869587  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.869597  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.872438  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:41.872458  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.872464  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.872469  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.872474  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.872478  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.872481  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.872804  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:41.873066  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:41.873078  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:41.873083  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:41.873087  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:41.875071  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:41.875083  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:41.875087  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:41.875090  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:41.875093  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:41.875096  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:41.875100  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:41 GMT
	I0526 21:24:41.875215  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:42.370081  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:42.370099  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.370104  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.370109  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.372230  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:42.372246  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.372251  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.372256  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.372260  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.372265  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.372271  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.372403  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:42.372734  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:42.372748  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.372753  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.372759  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.375284  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:42.375294  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.375299  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.375303  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.375307  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.375312  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.375316  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.375554  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:42.870439  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:42.870468  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.870475  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.870482  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.873403  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:42.873426  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.873432  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.873436  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.873439  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.873442  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.873445  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.873520  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:42.873807  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:42.873818  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:42.873824  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:42.873828  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:42.877344  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:42.877361  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:42.877365  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:42.877369  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:42.877374  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:42.877378  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:42.877383  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:42 GMT
	I0526 21:24:42.877614  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:42.877849  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:43.370448  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:43.370472  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.370477  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.370483  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.373051  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:43.373070  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.373076  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.373082  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.373087  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.373092  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.373097  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.373240  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:43.373530  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:43.373542  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.373547  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.373551  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.376311  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:43.376325  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.376329  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.376332  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.376335  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.376338  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.376341  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.377168  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:43.870182  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:43.870199  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.870204  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.870208  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.872727  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:43.872743  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.872748  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.872753  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.872757  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.872762  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.872766  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.873209  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:43.873569  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:43.873587  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:43.873593  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:43.873599  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:43.875540  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:43.875562  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:43.875570  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:43.875577  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:43.875583  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:43 GMT
	I0526 21:24:43.875591  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:43.875597  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:43.876010  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:44.370170  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:44.370194  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.370204  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.370213  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.372274  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:44.372295  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.372301  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.372307  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.372312  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.372327  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.372332  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.372557  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:44.372950  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:44.372971  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.372978  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.372985  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.375148  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:44.375161  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.375165  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.375169  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.375171  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.375175  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.375179  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.375436  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:44.870310  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:44.870328  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.870333  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.870337  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.872765  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:44.872786  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.872790  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.872794  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.872796  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.872804  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.872811  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.873273  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:44.873564  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:44.873578  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:44.873582  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:44.873586  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:44.875549  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:44.875568  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:44.875573  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:44.875578  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:44.875582  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:44.875586  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:44 GMT
	I0526 21:24:44.875590  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:44.875772  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:45.369580  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:45.369630  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.369648  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.369659  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.371522  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:45.371539  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.371545  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.371549  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.371554  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.371558  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.371562  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.372100  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:45.372407  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:45.372421  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.372426  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.372430  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.374935  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:45.374946  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.374950  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.374953  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.374956  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.374959  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.374964  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.375188  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:45.375469  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
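	Each cycle logged above is a plain GET against the apiserver with the Accept and User-Agent request headers shown, and every response carries the API Priority and Fairness identifiers (X-Kubernetes-Pf-Flowschema-Uid / X-Kubernetes-Pf-Prioritylevel-Uid) for the FlowSchema and priority level that handled the request. A minimal Go sketch that issues the same kind of GET and prints those headers follows; the bearer token and the skipped CA verification are assumptions for brevity, not how minikube itself authenticates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		// Assumed: a service-account token with permission to read pods.
		token := "REPLACE_WITH_TOKEN"

		client := &http.Client{
			// The test cluster uses a self-signed CA; verification is skipped
			// here purely for illustration. Real callers should load the CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		req, err := http.NewRequest("GET",
			"https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955", nil)
		if err != nil {
			panic(err)
		}
		// The same request headers that round_trippers logs above.
		req.Header.Set("Accept", "application/json, */*")
		req.Header.Set("User-Agent", "minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format")
		req.Header.Set("Authorization", "Bearer "+token)

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		// The X-Kubernetes-Pf-* response headers identify the API Priority and
		// Fairness FlowSchema and PriorityLevel that served the request.
		fmt.Println(resp.Status)
		fmt.Println("FlowSchema UID:   ", resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"))
		fmt.Println("PriorityLevel UID:", resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid"))
	}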
	I0526 21:24:45.870034  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:45.870070  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.870083  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.870093  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.873158  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:45.873174  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.873178  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.873181  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.873185  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.873187  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.873190  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.873812  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:45.874126  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:45.874148  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:45.874155  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:45.874162  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:45.877154  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:45.877165  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:45.877169  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:45.877172  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:45.877175  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:45.877178  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:45.877181  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:45 GMT
	I0526 21:24:45.878219  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:46.370167  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:46.370190  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.370195  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.370199  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.372766  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:46.372782  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.372787  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.372791  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.372795  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.372799  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.372804  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.373263  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:46.373622  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:46.373640  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.373647  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.373652  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.375756  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:46.375775  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.375781  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.375787  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.375792  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.375799  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.375804  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.376142  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:46.870043  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:46.870070  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.870077  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.870083  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.873256  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:46.873272  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.873276  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.873280  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.873284  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.873292  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.873296  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.873597  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:46.873976  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:46.873994  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:46.874000  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:46.874015  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:46.876444  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:46.876463  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:46.876469  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:46.876475  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:46.876480  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:46 GMT
	I0526 21:24:46.876487  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:46.876493  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:46.877064  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:47.369945  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:47.369968  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.369974  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.369978  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.373201  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:47.373226  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.373232  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.373238  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.373243  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.373251  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.373256  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.373422  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:47.373726  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:47.373739  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.373744  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.373749  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.375980  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:47.375990  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.375994  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.375997  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.376003  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.376011  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.376015  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.376269  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:47.376559  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:47.870217  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:47.870237  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.870243  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.870247  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.872660  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:47.872674  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.872678  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.872681  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.872684  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.872687  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.872690  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.873083  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:47.873358  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:47.873371  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:47.873376  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:47.873380  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:47.875100  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:47.875116  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:47.875121  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:47.875126  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:47.875130  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:47.875134  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:47.875138  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:47 GMT
	I0526 21:24:47.875353  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:48.370168  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:48.370187  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.370192  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.370196  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.372506  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.372523  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.372528  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.372533  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.372537  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.372541  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.372546  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.373050  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:48.373304  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:48.373316  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.373321  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.373325  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.375682  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.375700  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.375711  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.375716  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.375722  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.375727  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.375735  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.376241  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:48.870296  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:48.870316  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.870321  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.870325  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.872705  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.872723  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.872728  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.872733  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.872738  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.872742  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.872746  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.873355  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:48.873628  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:48.873641  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:48.873645  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:48.873649  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:48.875918  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:48.875933  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:48.875937  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:48.875940  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:48 GMT
	I0526 21:24:48.875943  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:48.875947  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:48.875950  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:48.876202  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:49.370589  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:49.370632  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.370646  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.370656  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.373143  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.373157  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.373161  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.373164  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.373167  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.373175  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.373183  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.373581  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:49.373924  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:49.373942  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.373946  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.373950  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.376597  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.376614  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.376619  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.376624  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.376628  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.376632  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.376636  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.376819  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:49.377139  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
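	The pod_ready lines mark the end of each wait iteration: roughly every 500 ms minikube re-fetches the etcd pod (and its node) and re-checks the Ready condition, looping until it flips to True or the overall wait times out. A minimal client-go sketch of this kind of readiness poll is below; the kubeconfig path, the 4-minute timeout, and the pod-only check (the real loop also inspects the node) are assumptions, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; minikube writes one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		podName := "etcd-multinode-20210526212238-510955"
		// Poll every 500 ms, matching the cadence visible in the log, until the
		// PodReady condition is True or the (assumed) timeout expires.
		err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("pod never became Ready:", err)
			return
		}
		fmt.Println("pod is Ready")
	}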
	I0526 21:24:49.869582  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:49.869619  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.869631  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.869641  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.871788  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.871805  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.871810  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.871815  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.871818  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.871823  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.871827  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.872253  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:49.872569  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:49.872585  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:49.872590  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:49.872594  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:49.874676  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:49.874693  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:49.874699  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:49.874765  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:49.874784  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:49.874789  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:49.874794  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:49 GMT
	I0526 21:24:49.874972  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:50.369724  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:50.369759  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.369796  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.369819  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.372326  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:50.372343  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.372349  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.372354  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.372358  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.372364  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.372368  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.372818  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:50.373172  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:50.373190  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.373197  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.373203  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.375319  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:50.375334  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.375338  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.375341  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.375344  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.375347  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.375350  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.375843  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:50.869656  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:50.869676  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.869682  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.869686  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.872107  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:50.872124  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.872128  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.872131  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.872134  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.872140  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.872143  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.872530  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:50.872806  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:50.872817  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:50.872822  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:50.872826  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:50.876568  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:50.876583  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:50.876588  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:50 GMT
	I0526 21:24:50.876593  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:50.876597  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:50.876601  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:50.876605  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:50.876759  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:51.369513  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:51.369551  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.369564  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.369575  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.372619  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:51.372633  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.372637  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.372640  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.372643  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.372646  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.372649  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.373265  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:51.373536  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:51.373548  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.373553  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.373557  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.375420  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:51.375435  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.375440  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.375445  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.375449  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.375456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.375461  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.375981  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:51.869622  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:51.869660  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.869672  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.869683  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.873134  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:51.873150  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.873155  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.873160  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.873165  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.873170  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.873174  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.873380  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:51.873674  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:51.873689  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:51.873693  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:51.873697  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:51.875957  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:51.875972  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:51.875977  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:51.875981  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:51.875989  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:51.875995  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:51.876000  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:51 GMT
	I0526 21:24:51.876538  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:51.876815  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:52.369840  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:52.369869  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.369875  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.369879  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.374720  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:52.374742  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.374748  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.374752  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.374757  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.374760  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.374764  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.375122  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:52.375478  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:52.375493  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.375498  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.375502  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.378002  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:52.378019  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.378024  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.378030  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.378034  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.378039  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.378045  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.378319  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:52.870288  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:52.870324  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.870331  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.870335  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.873645  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:52.873663  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.873667  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.873671  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.873674  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.873677  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.873686  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.873994  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:52.874295  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:52.874307  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:52.874311  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:52.874315  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:52.876474  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:52.876487  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:52.876491  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:52.876494  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:52.876497  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:52.876500  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:52.876503  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:52 GMT
	I0526 21:24:52.877108  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:53.370107  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:53.370133  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.370138  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.370142  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.374368  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:24:53.374384  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.374389  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.374393  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.374398  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.374403  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.374407  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.374670  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:53.375074  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:53.375095  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.375103  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.375109  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.378060  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:53.378072  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.378075  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.378079  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.378086  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.378090  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.378094  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.378417  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:53.869515  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:53.869551  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.869556  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.869560  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.873301  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:53.873322  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.873326  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.873329  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.873332  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.873341  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.873346  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.873912  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:53.874268  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:53.874288  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:53.874295  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:53.874303  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:53.877375  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:53.877389  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:53.877396  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:53.877401  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:53.877406  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:53.877410  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:53.877416  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:53 GMT
	I0526 21:24:53.877760  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:53.878008  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:24:54.369919  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:54.369943  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.369948  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.369952  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.372996  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:54.373013  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.373017  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.373020  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.373023  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.373026  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.373029  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.374031  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:54.374332  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:54.374343  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.374347  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.374351  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.377425  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:54.377442  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.377448  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.377453  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.377458  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.377462  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.377466  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.378623  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:54.869560  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:54.869607  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.869621  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.869631  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.872295  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:54.872314  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.872318  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.872321  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.872324  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.872327  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.872330  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.872461  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:54.872735  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:54.872746  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:54.872752  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:54.872757  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:54.875249  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:54.875268  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:54.875273  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:54.875277  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:54.875286  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:54.875289  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:54.875293  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:54 GMT
	I0526 21:24:54.875525  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:55.370352  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:55.370376  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.370381  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.370385  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.373460  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:55.373478  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.373483  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.373488  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.373492  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.373497  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.373501  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.374254  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:55.374605  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:55.374620  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.374627  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.374631  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.376626  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:55.376640  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.376644  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.376648  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.376651  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.376654  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.376657  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.376786  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:55.869605  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:55.869648  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.869660  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.869671  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.872649  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:55.872666  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.872671  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.872675  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.872680  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.872685  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.872688  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.873379  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:55.873738  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:55.873752  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:55.873757  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:55.873761  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:55.875854  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:55.875871  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:55.875876  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:55.875881  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:55 GMT
	I0526 21:24:55.875885  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:55.875889  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:55.875893  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:55.876183  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:56.370096  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:56.370113  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.370118  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.370122  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.372239  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:56.372257  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.372262  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.372267  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.372272  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.372276  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.372281  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.372389  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:56.372679  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:56.372692  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.372696  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.372700  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.374500  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:56.374512  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.374517  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.374521  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.374525  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.374530  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.374534  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.374863  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:56.375148  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
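	(Editorial note: each block above is one iteration of minikube's readiness poll, roughly every 500 ms: a GET on the etcd-multinode-20210526212238-510955 pod, a GET on the node, then the pod_ready verdict that the pod's Ready condition is still False. The check itself boils down to reading the PodReady condition from the pod status. The following is only a minimal, hypothetical client-go sketch of such a check, not minikube's actual implementation; the kubeconfig path and pod/namespace names are assumed from the log.)

	// readiness_sketch.go — illustrative only; assumes client-go >= v0.18 and the
	// default kubeconfig location. Pod and namespace names are taken from the log above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True —
	// the same condition the wait loop above keeps finding False.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the kubeconfig for the test cluster (path is an assumption, not from the log).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// One poll: fetch the pod and inspect its Ready condition.
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-multinode-20210526212238-510955", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q ready: %v\n", pod.Name, isPodReady(pod))
	}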
	I0526 21:24:56.869657  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:56.869706  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.869720  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.869731  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.872516  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:56.872534  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.872539  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.872544  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.872548  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.872552  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.872557  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.873735  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:56.874028  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:56.874040  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:56.874045  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:56.874049  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:56.876404  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:56.876422  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:56.876429  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:56 GMT
	I0526 21:24:56.876435  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:56.876440  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:56.876445  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:56.876448  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:56.876737  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:57.369571  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:57.369620  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.369639  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.369656  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.372850  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:57.373234  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.373251  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.373254  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.373257  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.373261  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.373264  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.373352  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:57.373631  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:57.373643  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.373648  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.373651  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.376340  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:57.376351  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.376354  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.376357  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.376360  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.376363  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.376366  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.376700  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:57.869527  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:57.869565  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.869581  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.869592  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.872622  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:57.872636  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.872641  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.872646  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.872651  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.872656  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.872661  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.872727  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:57.872996  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:57.873007  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:57.873012  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:57.873016  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:57.875048  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:57.875061  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:57.875065  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:57.875068  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:57.875071  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:57 GMT
	I0526 21:24:57.875074  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:57.875077  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:57.875468  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:58.370322  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:58.370341  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.370346  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.370350  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.372221  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:58.372235  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.372241  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.372244  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.372247  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.372251  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.372255  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.372549  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:58.372894  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:58.372909  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.372914  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.372919  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.374787  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:58.374802  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.374807  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.374812  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.374817  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.374821  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.374825  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.374993  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:58.375292  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
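
The repeated request/response pairs above are client-go's verbose round-tripper tracing (round_trippers.go): roughly every 500 ms, minikube's pod_ready.go fetches the etcd Pod and the Node it runs on, then checks whether the Pod's Ready condition has turned True, logging the "Ready":"False" line above until it does. A minimal sketch of an equivalent readiness poll with client-go follows; it assumes a reachable kubeconfig, and the function name waitForPodReady, the 500 ms interval, and the 6-minute timeout are illustrative values taken from this log rather than minikube's actual pod_ready.go implementation.

    // Illustrative sketch only (not minikube's pod_ready.go): poll a Pod's
    // Ready condition the way the log above does, using client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls the named Pod until its Ready condition is True,
    // or until the timeout expires. A Get error aborts the wait in this sketch;
    // a more forgiving poller could return false, nil and keep retrying.
    func waitForPodReady(cs *kubernetes.Clientset, ns, name string, interval, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollImmediate(interval, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            return fmt.Errorf("pod %q in %q never became Ready: %w", name, ns, err)
        }
        fmt.Printf("took %s waiting for pod %q to be Ready\n", time.Since(start), name)
        return nil
    }

    func main() {
        // Assumes the default kubeconfig location; path is illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPodReady(cs, "kube-system", "etcd-multinode-20210526212238-510955",
            500*time.Millisecond, 6*time.Minute); err != nil {
            panic(err)
        }
    }
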
	I0526 21:24:58.870234  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:58.870270  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.870284  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.870294  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.872849  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:58.872881  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.872888  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.872895  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.872901  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.872906  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.872911  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.873569  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:58.873888  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:58.873904  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:58.873909  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:58.873913  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:58.876002  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:58.876017  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:58.876021  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:58.876024  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:58.876027  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:58.876030  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:58.876034  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:58 GMT
	I0526 21:24:58.876299  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:59.369490  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:59.369527  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.369542  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.369553  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.372897  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:24:59.372918  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.372925  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.372931  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.372937  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.372941  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.372947  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.373414  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:59.373741  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:59.373759  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.373764  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.373768  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.375613  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:24:59.375625  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.375628  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.375632  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.375635  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.375638  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.375641  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.376179  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:24:59.870070  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:24:59.870091  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.870097  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.870103  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.873105  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:59.873121  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.873126  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.873129  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.873132  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.873135  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.873139  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.873512  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:24:59.873779  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:24:59.873793  527485 round_trippers.go:429] Request Headers:
	I0526 21:24:59.873798  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:24:59.873802  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:24:59.876684  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:24:59.876699  527485 round_trippers.go:451] Response Headers:
	I0526 21:24:59.876704  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:24:59.876710  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:24:59.876715  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:24:59.876721  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:24:59 GMT
	I0526 21:24:59.876726  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:24:59.877192  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:00.370068  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:00.370088  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.370092  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.370096  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.372679  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:00.372695  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.372699  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.372702  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.372705  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.372708  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.372711  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.373201  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:00.373448  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:00.373459  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.373463  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.373467  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.376419  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:00.376433  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.376438  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.376442  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.376446  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.376450  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.376454  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.376597  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:00.376885  527485 pod_ready.go:102] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:25:00.870428  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:00.870447  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.870452  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.870456  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.872526  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:00.872544  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.872549  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.872554  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.872558  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.872562  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.872567  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.873108  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:00.873421  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:00.873435  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:00.873440  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:00.873443  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:00.875303  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:00.875318  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:00.875324  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:00.875328  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:00.875333  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:00.875337  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:00 GMT
	I0526 21:25:00.875344  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:00.875615  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:01.370455  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:01.370475  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.370479  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.370483  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.372996  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:01.373010  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.373019  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.373025  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.373029  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.373035  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.373042  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.373283  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:01.373600  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:01.373613  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.373618  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.373621  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.375308  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:01.375325  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.375334  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.375338  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.375343  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.375347  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.375352  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.375561  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:01.870334  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:01.870351  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.870356  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.870360  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.874653  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:25:01.874674  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.874682  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.874687  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.874691  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.874695  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.874699  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.874967  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"342","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5865 chars]
	I0526 21:25:01.875209  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:01.875220  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:01.875224  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:01.875228  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:01.877718  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:01.877729  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:01.877734  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:01.877739  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:01.877743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:01.877748  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:01.877752  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:01 GMT
	I0526 21:25:01.878107  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.370099  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:25:02.370119  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.370124  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.370129  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.372539  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.372556  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.372561  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.372565  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.372570  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.372585  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.372589  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.372677  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"539","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5642 chars]
	I0526 21:25:02.372945  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.372957  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.372962  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.372965  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.374703  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.374719  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.374724  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.374729  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.374736  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.374739  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.374742  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.375182  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.375398  527485 pod_ready.go:92] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:02.375416  527485 pod_ready.go:81] duration metric: took 48.012392127s waiting for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
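
The 48.012392127s duration metric just logged spans roughly 96 iterations of the ~500 ms poll, i.e. on the order of 190 GETs (one for the Pod and one for the Node per iteration) before etcd reported Ready. The metric itself is plain wall-clock bookkeeping around the wait; a trivial, self-contained sketch of that pattern is below (the Sleep stands in for the poll loop, and none of this is minikube's actual code).

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        // Stand-in for the readiness poll shown in the log above.
        time.Sleep(500 * time.Millisecond)
        fmt.Printf("duration metric: took %s waiting for pod %q in %q namespace to be \"Ready\"\n",
            time.Since(start), "etcd-multinode-20210526212238-510955", "kube-system")
    }
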
	I0526 21:25:02.375430  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:02.375471  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955
	I0526 21:25:02.375481  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.375487  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.375492  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.377329  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.377344  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.377349  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.377353  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.377357  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.377361  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.377365  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.377514  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210526212238-510955","namespace":"kube-system","uid":"5d446255-3487-4319-9b9f-2294a93fd226","resourceVersion":"447","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.mirror":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.seen":"2021-05-26T21:23:43.638984722Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 7266 chars]
	I0526 21:25:02.377767  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.377780  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.377786  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.377791  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.379941  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.379951  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.379956  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.379960  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.379964  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.379968  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.379973  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.380165  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.380384  527485 pod_ready.go:92] pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:02.380396  527485 pod_ready.go:81] duration metric: took 4.954392ms waiting for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:02.380405  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:02.380442  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:02.380450  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.380454  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.380458  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.382393  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.382407  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.382411  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.382416  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.382422  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.382426  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.382432  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.382577  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:02.382844  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.382858  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.382864  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.382869  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.384623  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:02.384649  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.384652  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.384655  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.384658  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.384661  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.384663  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.385167  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:02.886040  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:02.886066  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.886070  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.886074  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.888279  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.888306  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.888311  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.888316  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.888320  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.888324  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.888329  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.888444  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:02.888835  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:02.888852  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:02.888874  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:02.888882  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:02.891010  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:02.891022  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:02.891026  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:02.891029  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:02.891032  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:02.891036  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:02.891040  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:02 GMT
	I0526 21:25:02.891390  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:03.386295  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:03.386313  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.386318  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.386324  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.388600  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:03.388616  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.388620  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.388625  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.388628  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.388631  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.388634  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.388797  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:03.389217  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:03.389238  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.389245  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.389249  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.391648  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:03.391659  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.391662  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.391665  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.391668  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.391671  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.391680  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.392346  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:03.885913  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:03.885952  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.885964  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.885980  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.889239  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:25:03.889254  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.889260  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.889264  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.889269  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.889274  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.889278  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.889898  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:03.890183  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:03.890196  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:03.890200  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:03.890204  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:03.892796  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:03.892809  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:03.892815  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:03.892820  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:03 GMT
	I0526 21:25:03.892824  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:03.892828  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:03.892833  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:03.893214  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:04.385953  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:04.385974  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.385980  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.385986  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.388914  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:04.388930  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.388935  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.388939  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.388943  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.388946  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.388949  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.389945  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:04.390218  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:04.390229  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.390234  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.390238  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.392034  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:04.392046  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.392050  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.392053  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.392058  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.392061  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.392064  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.392268  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:04.392559  527485 pod_ready.go:102] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
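	(Editor's note: the repeated GET pairs above and below are minikube's pod_ready wait loop: roughly every 500 ms it re-fetches the kube-controller-manager static pod and its node, and logs "Ready":"False" until the pod's PodReady condition becomes True, which finally happens at 21:25:09 when the pod's resourceVersion moves from 390 to 546. As a minimal sketch only, not minikube's actual implementation, and assuming a kubeconfig at the default location, an equivalent client-go poll would look like:

	// readiness_poll_sketch.go - illustrative only; pod/namespace names taken from the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the default kubeconfig (assumed path; the test harness wires this up differently).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		const ns = "kube-system"
		const name = "kube-controller-manager-multinode-20210526212238-510955"

		// Poll at the same ~500ms cadence seen in the round_trippers log lines.
		for {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Printf("pod %q in %q is Ready\n", name, ns)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	End of editor's note; the captured log continues below.)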
	I0526 21:25:04.886008  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:04.886029  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.886037  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.886042  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.888599  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:04.888618  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.888624  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.888629  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.888634  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.888638  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.888642  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.889177  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:04.889477  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:04.889491  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:04.889496  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:04.889500  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:04.892023  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:04.892037  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:04.892041  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:04.892046  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:04.892050  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:04.892058  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:04.892062  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:04 GMT
	I0526 21:25:04.892142  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:05.386081  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:05.386121  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.386137  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.386148  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.388646  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:05.388658  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.388662  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.388665  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.388668  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.388672  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.388676  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.389151  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:05.389448  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:05.389462  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.389468  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.389473  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.391731  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:05.391744  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.391750  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.391754  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.391758  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.391763  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.391776  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.392249  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:05.886100  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:05.886120  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.886125  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.886128  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.889414  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:25:05.889429  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.889433  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.889437  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.889442  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.889446  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.889451  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.889933  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:05.890197  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:05.890208  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:05.890215  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:05.890218  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:05.892713  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:05.892724  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:05.892729  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:05.892734  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:05.892738  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:05.892741  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:05.892744  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:05 GMT
	I0526 21:25:05.893120  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:06.385970  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:06.385989  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.385994  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.385998  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.388784  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:06.388805  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.388809  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.388819  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.388825  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.388830  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.388834  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.389178  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:06.389554  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:06.389571  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.389577  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.389583  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.392024  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:06.392043  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.392048  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.392053  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.392057  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.392062  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.392066  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.392258  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:06.886146  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:06.886166  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.886171  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.886175  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.889263  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:25:06.889277  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.889281  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.889284  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.889288  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.889291  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.889294  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.889456  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:06.889829  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:06.889840  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:06.889844  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:06.889849  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:06.892026  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:06.892038  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:06.892043  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:06.892049  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:06.892054  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:06.892058  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:06.892062  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:06 GMT
	I0526 21:25:06.892188  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:06.892495  527485 pod_ready.go:102] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"False"
	I0526 21:25:07.386079  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:07.386099  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.386104  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.386108  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.388338  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.388348  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.388353  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.388357  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.388361  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.388366  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.388371  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.388649  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:07.389070  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:07.389089  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.389095  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.389101  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.391381  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.391392  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.391395  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.391398  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.391404  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.391407  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.391410  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.391608  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:07.886386  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:07.886404  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.886409  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.886413  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.888909  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.888943  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.888948  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.888953  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.888957  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.888962  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.888966  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.889332  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:07.889710  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:07.889729  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:07.889735  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:07.889741  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:07.892177  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:07.892196  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:07.892203  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:07.892208  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:07 GMT
	I0526 21:25:07.892214  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:07.892222  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:07.892227  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:07.892641  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:08.386366  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:08.386386  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.386391  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.386396  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.388585  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.388597  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.388602  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.388607  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.388611  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.388616  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.388620  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.388770  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:08.389223  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:08.389247  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.389255  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.389271  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.391393  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.391406  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.391409  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.391413  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.391416  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.391418  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.391422  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.391675  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:08.886010  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:08.886030  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.886037  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.886043  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.888255  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.888279  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.888285  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.888290  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.888294  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.888298  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.888302  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.888526  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"390","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7083 chars]
	I0526 21:25:08.888895  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:08.888909  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:08.888914  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:08.888918  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:08.891198  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:08.891211  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:08.891215  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:08.891219  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:08.891222  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:08 GMT
	I0526 21:25:08.891225  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:08.891228  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:08.891430  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.386434  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:25:09.386464  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.386470  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.386474  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.392555  527485 round_trippers.go:448] Response Status: 200 OK in 6 milliseconds
	I0526 21:25:09.392584  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.392591  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.392596  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.392601  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.392605  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.392610  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.392884  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"546","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6822 chars]
	I0526 21:25:09.393390  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.393410  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.393417  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.393423  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.396417  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.396438  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.396445  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.396451  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.396456  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.396464  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.396471  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.397176  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.397504  527485 pod_ready.go:92] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:09.397532  527485 pod_ready.go:81] duration metric: took 7.017118929s waiting for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:09.397550  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:09.397615  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qbl42
	I0526 21:25:09.397627  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.397635  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.397642  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.399460  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:09.399478  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.399483  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.399489  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.399498  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.399503  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.399507  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.399661  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qbl42","generateName":"kube-proxy-","namespace":"kube-system","uid":"950a915d-c5f0-4e6f-bc12-ee97013032f0","resourceVersion":"453","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"controller-revision-hash":"b89db7f56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"59f7a309-d89a-4050-8e82-fc8da888387f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59f7a309-d89a-4050-8e82-fc8da888387f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".":{ [truncated 5529 chars]
	I0526 21:25:09.399960  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.399974  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.399980  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.399986  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.402671  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.402689  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.402694  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.402699  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.402703  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.402707  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.402711  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.402976  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.403265  527485 pod_ready.go:92] pod "kube-proxy-qbl42" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:09.403280  527485 pod_ready.go:81] duration metric: took 5.713239ms waiting for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:09.403289  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:09.403335  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:25:09.403345  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.403349  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.403353  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.404960  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:09.404975  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.404981  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.404986  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.404990  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.404994  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.404998  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.405340  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"344","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4795 chars]
	I0526 21:25:09.405641  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.405660  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.405667  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.405673  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.407536  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:09.407552  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.407558  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.407564  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.407572  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.407585  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.407590  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.407679  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:09.908611  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:25:09.908654  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.908667  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.908678  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.911382  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.911396  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.911399  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.911402  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.911405  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.911408  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.911411  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.911640  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"344","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4795 chars]
	I0526 21:25:09.911947  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:09.911965  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:09.911972  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:09.911979  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:09.914076  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:09.914089  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:09.914097  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:09.914101  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:09 GMT
	I0526 21:25:09.914104  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:09.914106  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:09.914115  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:09.914297  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:10.408088  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:25:10.408107  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:10.408111  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:10.408115  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:10.410208  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:10.410228  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:10.410234  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:10.410238  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:10.410243  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:10.410247  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:10 GMT
	I0526 21:25:10.410253  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:10.410415  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"547","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4552 chars]
	I0526 21:25:10.410665  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:25:10.410678  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:10.410683  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:10.410687  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:10.412777  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:10.412794  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:10.412803  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:10.412808  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:10.412812  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:10 GMT
	I0526 21:25:10.412817  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:10.412821  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:10.413168  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:25:10.413474  527485 pod_ready.go:92] pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:25:10.413501  527485 pod_ready.go:81] duration metric: took 1.010202839s waiting for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:25:10.413517  527485 pod_ready.go:38] duration metric: took 1m6.076583011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
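The pod_ready.go lines above poll each control-plane pod through the API server until its Ready condition reports True, then record the wait duration. A minimal client-go sketch of that check follows; it is illustrative only, not minikube's pod_ready.go. The namespace and pod name are taken from the trace, the 500ms interval is inferred from the gaps between the GETs above, and a kubeconfig at the default path is an assumption.

    // Illustrative sketch (not minikube's pod_ready.go): poll a pod's Ready
    // condition with client-go, the same check the GET/Response traces above
    // correspond to. Assumes a kubeconfig at the default location.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll roughly every 500ms, as the spacing of the GETs above suggests.
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"kube-scheduler-multinode-20210526212238-510955", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }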
	I0526 21:25:10.413541  527485 api_server.go:50] waiting for apiserver process to appear ...
	I0526 21:25:10.413561  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:25:10.413618  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:25:10.431796  527485 command_runner.go:124] > a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c
	I0526 21:25:10.433015  527485 cri.go:76] found id: "a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:10.433031  527485 cri.go:76] found id: ""
	I0526 21:25:10.433039  527485 logs.go:270] 1 containers: [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c]
	I0526 21:25:10.433084  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.437650  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.437679  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:25:10.437721  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:25:10.456742  527485 command_runner.go:124] > c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad
	I0526 21:25:10.458507  527485 cri.go:76] found id: "c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:10.458524  527485 cri.go:76] found id: ""
	I0526 21:25:10.458530  527485 logs.go:270] 1 containers: [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad]
	I0526 21:25:10.458564  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.462771  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.462801  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:25:10.462837  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:25:10.481966  527485 command_runner.go:124] > a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a
	I0526 21:25:10.482087  527485 cri.go:76] found id: "a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:10.482101  527485 cri.go:76] found id: ""
	I0526 21:25:10.482106  527485 logs.go:270] 1 containers: [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a]
	I0526 21:25:10.482140  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.486730  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.486770  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:25:10.486805  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:25:10.505145  527485 command_runner.go:124] > e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08
	I0526 21:25:10.505170  527485 cri.go:76] found id: "e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:10.505176  527485 cri.go:76] found id: ""
	I0526 21:25:10.505180  527485 logs.go:270] 1 containers: [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08]
	I0526 21:25:10.505215  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.508909  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.509381  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:25:10.509430  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:25:10.528716  527485 command_runner.go:124] > de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2
	I0526 21:25:10.528805  527485 cri.go:76] found id: "de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:10.528825  527485 cri.go:76] found id: ""
	I0526 21:25:10.528832  527485 logs.go:270] 1 containers: [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2]
	I0526 21:25:10.528889  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.532768  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.533363  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:25:10.533409  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:25:10.551820  527485 cri.go:76] found id: ""
	I0526 21:25:10.551840  527485 logs.go:270] 0 containers: []
	W0526 21:25:10.551846  527485 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:25:10.551853  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:25:10.551889  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:25:10.571635  527485 command_runner.go:124] > 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d
	I0526 21:25:10.571702  527485 cri.go:76] found id: "5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:10.571722  527485 cri.go:76] found id: ""
	I0526 21:25:10.571729  527485 logs.go:270] 1 containers: [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d]
	I0526 21:25:10.571761  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.575596  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:10.575627  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:25:10.575664  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:25:10.594214  527485 command_runner.go:124] > 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18
	I0526 21:25:10.594891  527485 cri.go:76] found id: "2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:10.594908  527485 cri.go:76] found id: ""
	I0526 21:25:10.594914  527485 logs.go:270] 1 containers: [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18]
	I0526 21:25:10.594943  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:10.599658  527485 command_runner.go:124] > /bin/crictl
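The cri.go runs above enumerate the containers for each control-plane component with "sudo crictl ps -a --quiet --name=<component>", recording one container ID per component and none for kubernetes-dashboard. A minimal sketch of the same enumeration follows; it is illustrative only, and it shells out locally with os/exec rather than through minikube's ssh_runner, which is an assumption.

    // Illustrative sketch (assumption: crictl available locally, not invoked
    // over minikube's ssh_runner): list CRI container IDs per control-plane
    // component, mirroring the "sudo crictl ps -a --quiet --name=..." runs above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
    		"kube-controller-manager",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		// crictl --quiet prints one container ID per line; none means the
    		// component is not running, as for kubernetes-dashboard above.
    		ids := strings.Fields(strings.TrimSpace(string(out)))
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }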
	I0526 21:25:10.599691  527485 logs.go:123] Gathering logs for dmesg ...
	I0526 21:25:10.599704  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:25:10.610167  527485 command_runner.go:124] > [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0526 21:25:10.610190  527485 command_runner.go:124] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0526 21:25:10.610200  527485 command_runner.go:124] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0526 21:25:10.610210  527485 command_runner.go:124] > [  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0526 21:25:10.610218  527485 command_runner.go:124] > [  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	I0526 21:25:10.610225  527485 command_runner.go:124] >               If you want to keep using the local clock, then add:
	I0526 21:25:10.610231  527485 command_runner.go:124] >                 "trace_clock=local"
	I0526 21:25:10.610235  527485 command_runner.go:124] >               on the kernel command line
	I0526 21:25:10.610246  527485 command_runner.go:124] > [  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0526 21:25:10.610253  527485 command_runner.go:124] > [  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	I0526 21:25:10.610265  527485 command_runner.go:124] > [  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	I0526 21:25:10.610274  527485 command_runner.go:124] > [  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	I0526 21:25:10.610286  527485 command_runner.go:124] > [  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	I0526 21:25:10.610298  527485 command_runner.go:124] > [  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	I0526 21:25:10.610313  527485 command_runner.go:124] > [  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	I0526 21:25:10.610330  527485 command_runner.go:124] > [  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	I0526 21:25:10.610340  527485 command_runner.go:124] > [May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	I0526 21:25:10.610354  527485 command_runner.go:124] > [  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	I0526 21:25:10.610367  527485 command_runner.go:124] > [  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	I0526 21:25:10.610379  527485 command_runner.go:124] > [ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	I0526 21:25:10.610388  527485 command_runner.go:124] > [ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	I0526 21:25:10.610400  527485 command_runner.go:124] > [May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	I0526 21:25:10.610409  527485 command_runner.go:124] > [ +45.152218] NFSD: Unable to end grace period: -110
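The dmesg block above is gathered by filtering the kernel ring buffer to warning-and-above messages and keeping the last 400 lines, exactly as the Run line before it shows. A minimal sketch reproducing that gathering step follows; it is illustrative only and runs the pipeline locally rather than over minikube's ssh_runner, which is an assumption.

    // Illustrative sketch (assumption: run locally with sudo, not over
    // minikube's ssh_runner): reproduce the filtered dmesg gathering above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same pipeline as the ssh_runner.go Run line above: warn and higher
    	// severities only, last 400 lines.
    	cmd := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("dmesg gathering failed:", err)
    	}
    	fmt.Print(string(out))
    }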
	I0526 21:25:10.611372  527485 logs.go:123] Gathering logs for kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] ...
	I0526 21:25:10.611385  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:10.629290  527485 command_runner.go:124] ! Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	I0526 21:25:10.629469  527485 command_runner.go:124] ! I0526 21:23:29.805604       1 server.go:632] external host was not specified, using 192.168.39.229
	I0526 21:25:10.629586  527485 command_runner.go:124] ! I0526 21:23:29.806982       1 server.go:182] Version: v1.20.2
	I0526 21:25:10.629639  527485 command_runner.go:124] ! I0526 21:23:30.593640       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0526 21:25:10.629742  527485 command_runner.go:124] ! I0526 21:23:30.598821       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:10.630141  527485 command_runner.go:124] ! I0526 21:23:30.598945       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:10.630210  527485 command_runner.go:124] ! I0526 21:23:30.600954       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:10.630270  527485 command_runner.go:124] ! I0526 21:23:30.601309       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:10.630413  527485 command_runner.go:124] ! I0526 21:23:30.616590       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.630534  527485 command_runner.go:124] ! I0526 21:23:30.617065       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.630581  527485 command_runner.go:124] ! I0526 21:23:30.995013       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.630779  527485 command_runner.go:124] ! I0526 21:23:30.995139       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.630887  527485 command_runner.go:124] ! I0526 21:23:31.030659       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:10.631019  527485 command_runner.go:124] ! I0526 21:23:31.031231       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.631078  527485 command_runner.go:124] ! I0526 21:23:31.031324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.631184  527485 command_runner.go:124] ! I0526 21:23:31.032369       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.631276  527485 command_runner.go:124] ! I0526 21:23:31.032725       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.631389  527485 command_runner.go:124] ! I0526 21:23:31.143094       1 instance.go:289] Using reconciler: lease
	I0526 21:25:10.631554  527485 command_runner.go:124] ! I0526 21:23:31.148814       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.631650  527485 command_runner.go:124] ! I0526 21:23:31.148936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.631741  527485 command_runner.go:124] ! I0526 21:23:31.164327       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.632118  527485 command_runner.go:124] ! I0526 21:23:31.164627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.632166  527485 command_runner.go:124] ! I0526 21:23:31.183831       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.632438  527485 command_runner.go:124] ! I0526 21:23:31.184185       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.632496  527485 command_runner.go:124] ! I0526 21:23:31.203621       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.632605  527485 command_runner.go:124] ! I0526 21:23:31.204140       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.632679  527485 command_runner.go:124] ! I0526 21:23:31.218608       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633053  527485 command_runner.go:124] ! I0526 21:23:31.218929       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.633100  527485 command_runner.go:124] ! I0526 21:23:31.235670       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633346  527485 command_runner.go:124] ! I0526 21:23:31.235780       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.633456  527485 command_runner.go:124] ! I0526 21:23:31.248767       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633511  527485 command_runner.go:124] ! I0526 21:23:31.248973       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.633930  527485 command_runner.go:124] ! I0526 21:23:31.270717       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.633984  527485 command_runner.go:124] ! I0526 21:23:31.272045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.634373  527485 command_runner.go:124] ! I0526 21:23:31.287807       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.634762  527485 command_runner.go:124] ! I0526 21:23:31.288158       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.634999  527485 command_runner.go:124] ! I0526 21:23:31.302175       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635061  527485 command_runner.go:124] ! I0526 21:23:31.302294       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.635164  527485 command_runner.go:124] ! I0526 21:23:31.318788       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635294  527485 command_runner.go:124] ! I0526 21:23:31.318898       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.635376  527485 command_runner.go:124] ! I0526 21:23:31.340681       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635448  527485 command_runner.go:124] ! I0526 21:23:31.341103       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.635619  527485 command_runner.go:124] ! I0526 21:23:31.364875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.635675  527485 command_runner.go:124] ! I0526 21:23:31.365260       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.636120  527485 command_runner.go:124] ! I0526 21:23:31.375229       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.636406  527485 command_runner.go:124] ! I0526 21:23:31.375353       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.636562  527485 command_runner.go:124] ! I0526 21:23:31.384385       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.636629  527485 command_runner.go:124] ! I0526 21:23:31.384585       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.636713  527485 command_runner.go:124] ! I0526 21:23:31.392770       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637831  527485 command_runner.go:124] ! I0526 21:23:31.392939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637847  527485 command_runner.go:124] ! I0526 21:23:31.406398       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637861  527485 command_runner.go:124] ! I0526 21:23:31.406589       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637871  527485 command_runner.go:124] ! I0526 21:23:31.421828       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637892  527485 command_runner.go:124] ! I0526 21:23:31.422392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637906  527485 command_runner.go:124] ! I0526 21:23:31.434772       1 rest.go:131] the default service ipfamily for this cluster is: IPv4
	I0526 21:25:10.637920  527485 command_runner.go:124] ! I0526 21:23:31.530123       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637939  527485 command_runner.go:124] ! I0526 21:23:31.530234       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637953  527485 command_runner.go:124] ! I0526 21:23:31.542917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637969  527485 command_runner.go:124] ! I0526 21:23:31.543258       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.637981  527485 command_runner.go:124] ! I0526 21:23:31.558871       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.637997  527485 command_runner.go:124] ! I0526 21:23:31.558975       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638010  527485 command_runner.go:124] ! I0526 21:23:31.578311       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638028  527485 command_runner.go:124] ! I0526 21:23:31.578428       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638042  527485 command_runner.go:124] ! I0526 21:23:31.579212       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638061  527485 command_runner.go:124] ! I0526 21:23:31.579406       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638074  527485 command_runner.go:124] ! I0526 21:23:31.593279       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638092  527485 command_runner.go:124] ! I0526 21:23:31.593392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638105  527485 command_runner.go:124] ! I0526 21:23:31.609260       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638121  527485 command_runner.go:124] ! I0526 21:23:31.609368       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638134  527485 command_runner.go:124] ! I0526 21:23:31.626851       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638150  527485 command_runner.go:124] ! I0526 21:23:31.626960       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638162  527485 command_runner.go:124] ! I0526 21:23:31.653023       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638196  527485 command_runner.go:124] ! I0526 21:23:31.653138       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638208  527485 command_runner.go:124] ! I0526 21:23:31.662951       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638223  527485 command_runner.go:124] ! I0526 21:23:31.663349       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638236  527485 command_runner.go:124] ! I0526 21:23:31.683106       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638252  527485 command_runner.go:124] ! I0526 21:23:31.684613       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638264  527485 command_runner.go:124] ! I0526 21:23:31.700741       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638283  527485 command_runner.go:124] ! I0526 21:23:31.701266       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638298  527485 command_runner.go:124] ! I0526 21:23:31.722045       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638317  527485 command_runner.go:124] ! I0526 21:23:31.722235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638332  527485 command_runner.go:124] ! I0526 21:23:31.736295       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638347  527485 command_runner.go:124] ! I0526 21:23:31.737071       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638357  527485 command_runner.go:124] ! I0526 21:23:31.751086       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638366  527485 command_runner.go:124] ! I0526 21:23:31.751202       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638375  527485 command_runner.go:124] ! I0526 21:23:31.767941       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638384  527485 command_runner.go:124] ! I0526 21:23:31.768045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638390  527485 command_runner.go:124] ! I0526 21:23:31.784917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638400  527485 command_runner.go:124] ! I0526 21:23:31.785029       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638409  527485 command_runner.go:124] ! I0526 21:23:31.802204       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638418  527485 command_runner.go:124] ! I0526 21:23:31.802314       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638426  527485 command_runner.go:124] ! I0526 21:23:31.817427       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638440  527485 command_runner.go:124] ! I0526 21:23:31.817616       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638454  527485 command_runner.go:124] ! I0526 21:23:31.837841       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638468  527485 command_runner.go:124] ! I0526 21:23:31.837939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638478  527485 command_runner.go:124] ! I0526 21:23:31.860217       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638493  527485 command_runner.go:124] ! I0526 21:23:31.861221       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638507  527485 command_runner.go:124] ! I0526 21:23:31.871254       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638523  527485 command_runner.go:124] ! I0526 21:23:31.872836       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638534  527485 command_runner.go:124] ! I0526 21:23:31.884052       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638551  527485 command_runner.go:124] ! I0526 21:23:31.884160       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638564  527485 command_runner.go:124] ! I0526 21:23:31.898818       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638578  527485 command_runner.go:124] ! I0526 21:23:31.898925       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638587  527485 command_runner.go:124] ! I0526 21:23:31.913046       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638597  527485 command_runner.go:124] ! I0526 21:23:31.913149       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638605  527485 command_runner.go:124] ! I0526 21:23:31.925884       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638614  527485 command_runner.go:124] ! I0526 21:23:31.925994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638623  527485 command_runner.go:124] ! I0526 21:23:31.939143       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638632  527485 command_runner.go:124] ! I0526 21:23:31.939253       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638640  527485 command_runner.go:124] ! I0526 21:23:31.954393       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638650  527485 command_runner.go:124] ! I0526 21:23:31.956005       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638658  527485 command_runner.go:124] ! I0526 21:23:31.964255       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638670  527485 command_runner.go:124] ! I0526 21:23:31.964369       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638679  527485 command_runner.go:124] ! I0526 21:23:31.980824       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638688  527485 command_runner.go:124] ! I0526 21:23:31.980931       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638697  527485 command_runner.go:124] ! I0526 21:23:31.998875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638706  527485 command_runner.go:124] ! I0526 21:23:31.998978       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638717  527485 command_runner.go:124] ! I0526 21:23:32.014057       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638726  527485 command_runner.go:124] ! I0526 21:23:32.014169       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638736  527485 command_runner.go:124] ! I0526 21:23:32.027301       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638746  527485 command_runner.go:124] ! I0526 21:23:32.027633       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638755  527485 command_runner.go:124] ! I0526 21:23:32.046160       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638765  527485 command_runner.go:124] ! I0526 21:23:32.046890       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638777  527485 command_runner.go:124] ! I0526 21:23:32.068538       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638786  527485 command_runner.go:124] ! I0526 21:23:32.069814       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638795  527485 command_runner.go:124] ! I0526 21:23:32.087119       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638804  527485 command_runner.go:124] ! I0526 21:23:32.087547       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638813  527485 command_runner.go:124] ! I0526 21:23:32.097832       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638822  527485 command_runner.go:124] ! I0526 21:23:32.097940       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638831  527485 command_runner.go:124] ! I0526 21:23:32.107249       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638841  527485 command_runner.go:124] ! I0526 21:23:32.107932       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638849  527485 command_runner.go:124] ! I0526 21:23:32.119796       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638859  527485 command_runner.go:124] ! I0526 21:23:32.119897       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638868  527485 command_runner.go:124] ! I0526 21:23:32.128209       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638877  527485 command_runner.go:124] ! I0526 21:23:32.128321       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638885  527485 command_runner.go:124] ! I0526 21:23:32.138008       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638895  527485 command_runner.go:124] ! I0526 21:23:32.138111       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638903  527485 command_runner.go:124] ! I0526 21:23:32.160727       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638912  527485 command_runner.go:124] ! I0526 21:23:32.160833       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638922  527485 command_runner.go:124] ! I0526 21:23:32.186843       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638932  527485 command_runner.go:124] ! I0526 21:23:32.186949       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638940  527485 command_runner.go:124] ! I0526 21:23:32.198121       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638950  527485 command_runner.go:124] ! I0526 21:23:32.198232       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638958  527485 command_runner.go:124] ! I0526 21:23:32.206015       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638968  527485 command_runner.go:124] ! I0526 21:23:32.206127       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638976  527485 command_runner.go:124] ! I0526 21:23:32.222761       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.638986  527485 command_runner.go:124] ! I0526 21:23:32.223204       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.638994  527485 command_runner.go:124] ! I0526 21:23:32.232528       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639003  527485 command_runner.go:124] ! I0526 21:23:32.232629       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639013  527485 command_runner.go:124] ! I0526 21:23:32.245897       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639023  527485 command_runner.go:124] ! I0526 21:23:32.246007       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639033  527485 command_runner.go:124] ! I0526 21:23:32.263847       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639057  527485 command_runner.go:124] ! I0526 21:23:32.263950       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639068  527485 command_runner.go:124] ! I0526 21:23:32.275996       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639078  527485 command_runner.go:124] ! I0526 21:23:32.276100       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639087  527485 command_runner.go:124] ! I0526 21:23:32.286992       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639098  527485 command_runner.go:124] ! I0526 21:23:32.288760       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639106  527485 command_runner.go:124] ! I0526 21:23:32.300558       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639116  527485 command_runner.go:124] ! I0526 21:23:32.300656       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639126  527485 command_runner.go:124] ! W0526 21:23:32.466350       1 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
	I0526 21:25:10.639135  527485 command_runner.go:124] ! W0526 21:23:32.475974       1 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639146  527485 command_runner.go:124] ! W0526 21:23:32.486620       1 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639156  527485 command_runner.go:124] ! W0526 21:23:32.495038       1 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639167  527485 command_runner.go:124] ! W0526 21:23:32.498634       1 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639176  527485 command_runner.go:124] ! W0526 21:23:32.503834       1 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639186  527485 command_runner.go:124] ! W0526 21:23:32.506839       1 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:10.639210  527485 command_runner.go:124] ! W0526 21:23:32.511920       1 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
	I0526 21:25:10.639221  527485 command_runner.go:124] ! W0526 21:23:32.512155       1 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
	I0526 21:25:10.639240  527485 command_runner.go:124] ! I0526 21:23:32.520325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:10.639260  527485 command_runner.go:124] ! I0526 21:23:32.520699       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:10.639270  527485 command_runner.go:124] ! I0526 21:23:32.522294       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639281  527485 command_runner.go:124] ! I0526 21:23:32.522675       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639289  527485 command_runner.go:124] ! I0526 21:23:32.531035       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:10.639298  527485 command_runner.go:124] ! I0526 21:23:32.531144       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:10.639309  527485 command_runner.go:124] ! I0526 21:23:34.690784       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:10.639318  527485 command_runner.go:124] ! I0526 21:23:34.691285       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.639331  527485 command_runner.go:124] ! I0526 21:23:34.692130       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
	I0526 21:25:10.639342  527485 command_runner.go:124] ! I0526 21:23:34.692740       1 secure_serving.go:197] Serving securely on [::]:8443
	I0526 21:25:10.639352  527485 command_runner.go:124] ! I0526 21:23:34.693343       1 apf_controller.go:261] Starting API Priority and Fairness config controller
	I0526 21:25:10.639360  527485 command_runner.go:124] ! I0526 21:23:34.693677       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:10.639369  527485 command_runner.go:124] ! I0526 21:23:34.694744       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0526 21:25:10.639380  527485 command_runner.go:124] ! I0526 21:23:34.694836       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0526 21:25:10.639388  527485 command_runner.go:124] ! I0526 21:23:34.694880       1 available_controller.go:475] Starting AvailableConditionController
	I0526 21:25:10.639397  527485 command_runner.go:124] ! I0526 21:23:34.694885       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0526 21:25:10.639408  527485 command_runner.go:124] ! I0526 21:23:34.694904       1 autoregister_controller.go:141] Starting autoregister controller
	I0526 21:25:10.639419  527485 command_runner.go:124] ! I0526 21:23:34.694908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0526 21:25:10.639428  527485 command_runner.go:124] ! I0526 21:23:34.696887       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0526 21:25:10.639437  527485 command_runner.go:124] ! I0526 21:23:34.697053       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0526 21:25:10.639455  527485 command_runner.go:124] ! I0526 21:23:34.697670       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	I0526 21:25:10.639464  527485 command_runner.go:124] ! I0526 21:23:34.697935       1 controller.go:83] Starting OpenAPI AggregationController
	I0526 21:25:10.639472  527485 command_runner.go:124] ! I0526 21:23:34.698627       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0526 21:25:10.639482  527485 command_runner.go:124] ! I0526 21:23:34.705120       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.639492  527485 command_runner.go:124] ! I0526 21:23:34.705289       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:10.639502  527485 command_runner.go:124] ! I0526 21:23:34.706119       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0526 21:25:10.639511  527485 command_runner.go:124] ! I0526 21:23:34.706246       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0526 21:25:10.639527  527485 command_runner.go:124] ! E0526 21:23:34.733148       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.229, ResourceVersion: 0, AdditionalErrorMsg: 
	I0526 21:25:10.639535  527485 command_runner.go:124] ! I0526 21:23:34.762565       1 controller.go:86] Starting OpenAPI controller
	I0526 21:25:10.639543  527485 command_runner.go:124] ! I0526 21:23:34.762983       1 naming_controller.go:291] Starting NamingConditionController
	I0526 21:25:10.639553  527485 command_runner.go:124] ! I0526 21:23:34.763230       1 establishing_controller.go:76] Starting EstablishingController
	I0526 21:25:10.639561  527485 command_runner.go:124] ! I0526 21:23:34.763815       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0526 21:25:10.639572  527485 command_runner.go:124] ! I0526 21:23:34.764676       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0526 21:25:10.639581  527485 command_runner.go:124] ! I0526 21:23:34.765003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0526 21:25:10.639588  527485 command_runner.go:124] ! I0526 21:23:34.894833       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0526 21:25:10.639597  527485 command_runner.go:124] ! I0526 21:23:34.895159       1 cache.go:39] Caches are synced for autoregister controller
	I0526 21:25:10.639605  527485 command_runner.go:124] ! I0526 21:23:34.895543       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0526 21:25:10.639615  527485 command_runner.go:124] ! I0526 21:23:34.895893       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0526 21:25:10.639624  527485 command_runner.go:124] ! I0526 21:23:34.897085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0526 21:25:10.639632  527485 command_runner.go:124] ! I0526 21:23:34.899871       1 apf_controller.go:266] Running API Priority and Fairness config worker
	I0526 21:25:10.639640  527485 command_runner.go:124] ! I0526 21:23:34.907242       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0526 21:25:10.639647  527485 command_runner.go:124] ! I0526 21:23:35.022751       1 controller.go:609] quota admission added evaluator for: namespaces
	I0526 21:25:10.639660  527485 command_runner.go:124] ! I0526 21:23:35.690855       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0526 21:25:10.639673  527485 command_runner.go:124] ! I0526 21:23:35.691097       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0526 21:25:10.639685  527485 command_runner.go:124] ! I0526 21:23:35.708402       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0526 21:25:10.639694  527485 command_runner.go:124] ! I0526 21:23:35.726885       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0526 21:25:10.639704  527485 command_runner.go:124] ! I0526 21:23:35.727088       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0526 21:25:10.639712  527485 command_runner.go:124] ! I0526 21:23:36.334571       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0526 21:25:10.639721  527485 command_runner.go:124] ! I0526 21:23:36.389004       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0526 21:25:10.639729  527485 command_runner.go:124] ! W0526 21:23:36.485873       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.229]
	I0526 21:25:10.639738  527485 command_runner.go:124] ! I0526 21:23:36.487435       1 controller.go:609] quota admission added evaluator for: endpoints
	I0526 21:25:10.639747  527485 command_runner.go:124] ! I0526 21:23:36.499209       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0526 21:25:10.639756  527485 command_runner.go:124] ! I0526 21:23:37.294654       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0526 21:25:10.639763  527485 command_runner.go:124] ! I0526 21:23:38.382157       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0526 21:25:10.639777  527485 command_runner.go:124] ! I0526 21:23:38.454712       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0526 21:25:10.639788  527485 command_runner.go:124] ! I0526 21:23:43.955877       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0526 21:25:10.639796  527485 command_runner.go:124] ! I0526 21:23:53.285833       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0526 21:25:10.639806  527485 command_runner.go:124] ! I0526 21:23:53.338274       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0526 21:25:10.639813  527485 command_runner.go:124] ! I0526 21:24:01.973387       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:10.639823  527485 command_runner.go:124] ! I0526 21:24:01.973608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.639833  527485 command_runner.go:124] ! I0526 21:24:01.973627       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.639839  527485 command_runner.go:124] ! I0526 21:24:43.497572       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:10.639850  527485 command_runner.go:124] ! I0526 21:24:43.497775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.639859  527485 command_runner.go:124] ! I0526 21:24:43.498072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
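
The kube-apiserver log above ends with the server serving securely on [::]:8443 and its caches synced. As an illustrative sketch only (not part of minikube's log gathering), the control plane can be probed on that same port from outside the node; the address 192.168.39.229 and port 8443 are taken from the log, while the /healthz path, the InsecureSkipVerify setting, and the assumption that anonymous access to the health endpoints is allowed are all assumptions for a quick ad-hoc check, not a hardened client.

	// apiserver_probe.go - sketch only: probe the apiserver health endpoint seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The cluster cert is self-signed, so skip verification for this ad-hoc probe (assumption).
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.229:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}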
	I0526 21:25:10.650003  527485 logs.go:123] Gathering logs for etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] ...
	I0526 21:25:10.650018  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:10.671546  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:10.671564  527485 command_runner.go:124] ! 2021-05-26 21:23:30.145280 I | etcdmain: etcd Version: 3.4.13
	I0526 21:25:10.671571  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146007 I | etcdmain: Git SHA: ae9734ed2
	I0526 21:25:10.671578  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146359 I | etcdmain: Go Version: go1.12.17
	I0526 21:25:10.671590  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146935 I | etcdmain: Go OS/Arch: linux/amd64
	I0526 21:25:10.671599  527485 command_runner.go:124] ! 2021-05-26 21:23:30.147549 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
	I0526 21:25:10.671608  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:10.671622  527485 command_runner.go:124] ! 2021-05-26 21:23:30.148927 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:10.671632  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159191 I | embed: name = multinode-20210526212238-510955
	I0526 21:25:10.671639  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159781 I | embed: data dir = /var/lib/minikube/etcd
	I0526 21:25:10.671646  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161368 I | embed: member dir = /var/lib/minikube/etcd/member
	I0526 21:25:10.671657  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161781 I | embed: heartbeat = 100ms
	I0526 21:25:10.671665  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162024 I | embed: election = 1000ms
	I0526 21:25:10.671671  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162419 I | embed: snapshot count = 10000
	I0526 21:25:10.671680  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162834 I | embed: advertise client URLs = https://192.168.39.229:2379
	I0526 21:25:10.671688  527485 command_runner.go:124] ! 2021-05-26 21:23:30.186657 I | etcdserver: starting member b8647f2870156d71 in cluster 2bfbf13ce68722b
	I0526 21:25:10.671695  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=()
	I0526 21:25:10.671702  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 0
	I0526 21:25:10.671712  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: newRaft b8647f2870156d71 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	I0526 21:25:10.671718  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 1
	I0526 21:25:10.671726  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:10.671735  527485 command_runner.go:124] ! 2021-05-26 21:23:30.205555 W | auth: simple token is not cryptographically signed
	I0526 21:25:10.671745  527485 command_runner.go:124] ! 2021-05-26 21:23:30.234208 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	I0526 21:25:10.671764  527485 command_runner.go:124] ! 2021-05-26 21:23:30.243414 I | etcdserver: b8647f2870156d71 as single-node; fast-forwarding 9 ticks (election ticks 10)
	I0526 21:25:10.671777  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:10.671789  527485 command_runner.go:124] ! 2021-05-26 21:23:30.255082 I | etcdserver/membership: added member b8647f2870156d71 [https://192.168.39.229:2380] to cluster 2bfbf13ce68722b
	I0526 21:25:10.671802  527485 command_runner.go:124] ! 2021-05-26 21:23:30.261097 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:10.671813  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264526 I | embed: listening for peers on 192.168.39.229:2380
	I0526 21:25:10.671822  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264701 I | embed: listening for metrics on http://127.0.0.1:2381
	I0526 21:25:10.671833  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 is starting a new election at term 1
	I0526 21:25:10.671849  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became candidate at term 2
	I0526 21:25:10.671863  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 2
	I0526 21:25:10.671873  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became leader at term 2
	I0526 21:25:10.671880  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 2
	I0526 21:25:10.671890  527485 command_runner.go:124] ! 2021-05-26 21:23:30.893688 I | etcdserver: setting up the initial cluster version to 3.4
	I0526 21:25:10.671898  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897562 I | embed: ready to serve client requests
	I0526 21:25:10.671911  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897893 I | etcdserver: published {Name:multinode-20210526212238-510955 ClientURLs:[https://192.168.39.229:2379]} to cluster 2bfbf13ce68722b
	I0526 21:25:10.671924  527485 command_runner.go:124] ! 2021-05-26 21:23:30.898097 I | embed: ready to serve client requests
	I0526 21:25:10.671937  527485 command_runner.go:124] ! 2021-05-26 21:23:30.904911 I | embed: serving client requests on 127.0.0.1:2379
	I0526 21:25:10.671951  527485 command_runner.go:124] ! 2021-05-26 21:23:30.925406 I | embed: serving client requests on 192.168.39.229:2379
	I0526 21:25:10.671961  527485 command_runner.go:124] ! 2021-05-26 21:23:30.930764 N | etcdserver/membership: set the initial cluster version to 3.4
	I0526 21:25:10.671969  527485 command_runner.go:124] ! 2021-05-26 21:23:30.973015 I | etcdserver/api: enabled capabilities for version 3.4
	I0526 21:25:10.671984  527485 command_runner.go:124] ! 2021-05-26 21:23:35.005110 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (158.136927ms) to execute
	I0526 21:25:10.672004  527485 command_runner.go:124] ! 2021-05-26 21:23:35.008540 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (159.3133ms) to execute
	I0526 21:25:10.672026  527485 command_runner.go:124] ! 2021-05-26 21:23:35.012635 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.936302ms) to execute
	I0526 21:25:10.672068  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013064 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (148.811077ms) to execute
	I0526 21:25:10.672080  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013577 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (157.477156ms) to execute
	I0526 21:25:10.672094  527485 command_runner.go:124] ! 2021-05-26 21:23:48.034379 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672107  527485 command_runner.go:124] ! 2021-05-26 21:23:50.916831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672121  527485 command_runner.go:124] ! 2021-05-26 21:24:00.917857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672135  527485 command_runner.go:124] ! 2021-05-26 21:24:10.918220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672148  527485 command_runner.go:124] ! 2021-05-26 21:24:20.917896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672160  527485 command_runner.go:124] ! 2021-05-26 21:24:30.916918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672169  527485 command_runner.go:124] ! 2021-05-26 21:24:40.917190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672185  527485 command_runner.go:124] ! 2021-05-26 21:24:50.917237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:10.672202  527485 command_runner.go:124] ! 2021-05-26 21:25:00.916673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
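
The etcd log above reports "/health OK (status code 200)" roughly every 10 seconds and shows the metrics listener on http://127.0.0.1:2381. The sketch below is not minikube code; it only polls that same endpoint from the node. The listener address and the ~10s cadence come from the log, everything else (loop count, timeout behavior) is an assumption.

	// etcd_health.go - sketch only: poll etcd's /health on the metrics listener from the log above.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		for i := 0; i < 3; i++ {
			resp, err := http.Get("http://127.0.0.1:2381/health") // metrics listener shown in the etcd log
			if err != nil {
				fmt.Println("etcd health check failed:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("etcd /health: %d %s\n", resp.StatusCode, body)
			}
			time.Sleep(10 * time.Second) // the log shows checks roughly every 10 seconds
		}
	}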
	I0526 21:25:10.675785  527485 logs.go:123] Gathering logs for coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] ...
	I0526 21:25:10.675801  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:10.692545  527485 command_runner.go:124] > .:53
	I0526 21:25:10.692677  527485 command_runner.go:124] > [INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	I0526 21:25:10.693108  527485 command_runner.go:124] > CoreDNS-1.7.0
	I0526 21:25:10.693188  527485 command_runner.go:124] > linux/amd64, go1.14.4, f59c03d
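
Each "Gathering logs for …" step in this report reduces to the command visible in the Run lines: sudo /bin/crictl logs --tail 400 <container-id>, executed over SSH on the node. The snippet below is only a local approximation of that pattern using os/exec (minikube itself goes through its ssh_runner); the container ID is a placeholder, not a value from this report.

	// gather_logs.go - sketch only: the crictl invocation mirrored from the Run lines above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerLogs(id string) (string, error) {
		// Mirrors: sudo /bin/crictl logs --tail 400 <container-id>
		out, err := exec.Command("sudo", "/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Placeholder ID; in the report the IDs come from the container runtime listing.
		logs, err := containerLogs("<container-id>")
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(logs)
	}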
	I0526 21:25:10.694231  527485 logs.go:123] Gathering logs for kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] ...
	I0526 21:25:10.694248  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:10.710551  527485 command_runner.go:124] ! I0526 21:23:31.228401       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:10.710587  527485 command_runner.go:124] ! W0526 21:23:34.792981       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0526 21:25:10.710611  527485 command_runner.go:124] ! W0526 21:23:34.795544       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:10.710629  527485 command_runner.go:124] ! W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0526 21:25:10.710640  527485 command_runner.go:124] ! W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:25:10.710657  527485 command_runner.go:124] ! I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:10.710678  527485 command_runner.go:124] ! I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:10.710691  527485 command_runner.go:124] ! I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:25:10.710699  527485 command_runner.go:124] ! I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:10.710720  527485 command_runner.go:124] ! E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:10.710751  527485 command_runner.go:124] ! E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:10.710780  527485 command_runner.go:124] ! E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:10.710811  527485 command_runner.go:124] ! E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0526 21:25:10.710835  527485 command_runner.go:124] ! E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0526 21:25:10.710862  527485 command_runner.go:124] ! E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:10.710892  527485 command_runner.go:124] ! E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:10.710912  527485 command_runner.go:124] ! E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:10.710961  527485 command_runner.go:124] ! E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:10.710987  527485 command_runner.go:124] ! E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0526 21:25:10.711011  527485 command_runner.go:124] ! E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:10.711037  527485 command_runner.go:124] ! E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0526 21:25:10.711065  527485 command_runner.go:124] ! E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:10.711085  527485 command_runner.go:124] ! E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:10.711116  527485 command_runner.go:124] ! E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:10.711175  527485 command_runner.go:124] ! E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:10.711212  527485 command_runner.go:124] ! E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:10.711241  527485 command_runner.go:124] ! E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:10.711260  527485 command_runner.go:124] ! I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0526 21:25:10.715541  527485 logs.go:123] Gathering logs for kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] ...
	I0526 21:25:10.715557  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:10.734573  527485 command_runner.go:124] ! I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:25:10.734591  527485 command_runner.go:124] ! I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	I0526 21:25:10.734600  527485 command_runner.go:124] ! W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:25:10.734607  527485 command_runner.go:124] ! I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:25:10.734613  527485 command_runner.go:124] ! I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:25:10.734624  527485 command_runner.go:124] ! I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:25:10.734635  527485 command_runner.go:124] ! I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:25:10.734643  527485 command_runner.go:124] ! I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:25:10.734653  527485 command_runner.go:124] ! I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:25:10.734661  527485 command_runner.go:124] ! I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:25:10.734671  527485 command_runner.go:124] ! I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:25:10.734679  527485 command_runner.go:124] ! I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:25:10.734686  527485 command_runner.go:124] ! I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
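
kube-proxy's startup above sets two conntrack sysctls (nf_conntrack_tcp_timeout_established to 86400 and nf_conntrack_tcp_timeout_close_wait to 3600). A minimal sketch for reading those values back on the node follows; the /proc/sys paths are the standard sysctl layout, not something taken from minikube or kube-proxy code.

	// conntrack_check.go - sketch only: read back the conntrack sysctls kube-proxy set in the log above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		for _, key := range []string{
			"net/netfilter/nf_conntrack_tcp_timeout_established", // expected 86400 per the log
			"net/netfilter/nf_conntrack_tcp_timeout_close_wait",  // expected 3600 per the log
		} {
			b, err := os.ReadFile("/proc/sys/" + key)
			if err != nil {
				fmt.Println(key, "read error:", err)
				continue
			}
			fmt.Printf("%s = %s\n", key, strings.TrimSpace(string(b)))
		}
	}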
	I0526 21:25:10.735203  527485 logs.go:123] Gathering logs for storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] ...
	I0526 21:25:10.735216  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:10.754134  527485 command_runner.go:124] ! I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:25:10.754160  527485 command_runner.go:124] ! I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:25:10.754176  527485 command_runner.go:124] ! I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:25:10.754191  527485 command_runner.go:124] ! I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:25:10.754214  527485 command_runner.go:124] ! I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:10.754248  527485 command_runner.go:124] ! I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:25:10.754282  527485 command_runner.go:124] ! I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
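
The storage-provisioner log above shows it acquiring the kube-system/k8s.io-minikube-hostpath lease before starting its provisioner controller (the recorded event indicates an Endpoints-based lock). The sketch below shows the same acquire-then-work pattern with client-go's leaderelection package, using a Lease lock for brevity; the lock name and namespace come from the log, the durations and all other details are assumptions, and this is not the provisioner's actual code.

	// leader_elect.go - sketch only: the lease-acquisition pattern seen in the provisioner log above.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the sketch runs inside the cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		hostname, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // assumed durations; not taken from the provisioner
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; the provisioner controller would start here")
				},
				OnStoppedLeading: func() {
					log.Println("lost the lease; shutting down")
				},
			},
		})
	}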
	I0526 21:25:10.755037  527485 logs.go:123] Gathering logs for containerd ...
	I0526 21:25:10.755055  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:25:10.769442  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:10 UTC. --
	I0526 21:25:10.769468  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:10.769480  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:10.769499  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.412639957Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:10.769524  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.454795725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:10.769544  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.455022736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769579  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456819758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.769602  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456940685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769632  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457199432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.769665  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457299817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769687  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457342626Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:10.769708  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457353348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769731  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457375564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769752  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457518971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.769784  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457752665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.769804  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457768067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:10.769826  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457801760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:10.769844  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457811694Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:10.769869  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461742670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:10.769900  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461851430Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:10.769923  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462036878Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769945  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462069131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769964  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462082171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769980  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462094524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.769998  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462115116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770015  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462127721Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770031  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462139766Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770046  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462157542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770064  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462167923Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:10.770077  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462295610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:10.770092  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462357720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:10.770106  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462745295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.770118  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462770123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:10.770132  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462815565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770147  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462827921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770162  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462846347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770175  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462857513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770189  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462870788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770205  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462881154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770562  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462892049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770614  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462903002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770642  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462913917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:10.770670  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462958764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770698  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462972025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770728  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462983386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770754  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462994704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770787  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463133131Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `systemd_cgroup` only works for runtime io.containerd.runtime.v1.linux"
	I0526 21:25:10.770814  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463145276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.770833  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463363744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:10.770858  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463401676Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:10.770882  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463415404Z" level=info msg="containerd successfully booted in 0.052163s"
	I0526 21:25:10.770900  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Stopping containerd container runtime...
	I0526 21:25:10.770913  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: containerd.service: Succeeded.
	I0526 21:25:10.770931  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Stopped containerd container runtime.
	I0526 21:25:10.770944  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:10.770962  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:10.770983  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.677351233Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:10.771013  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703735354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:10.771044  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703939180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771090  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706070962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.771127  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706222939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771169  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706683734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.771197  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706837938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771225  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706963959Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:10.771253  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707081760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771281  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707216688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771315  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707381113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:10.771361  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707841019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:10.771389  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707973506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:10.771416  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708095816Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:10.771434  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708236930Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:10.771465  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708536776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:10.771489  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708698510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:10.771519  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708937323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771545  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709074999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771624  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709196994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771671  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709315424Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771707  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709506686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771744  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709629192Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771862  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709743913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771923  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709857985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.771952  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709979410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:10.771979  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710125076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:10.772005  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710271949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:10.772249  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710830775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:10.772284  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710974791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:10.772304  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711117145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772322  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711243334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772337  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711363735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772351  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711549081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772363  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711666234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772377  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711781506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772389  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711895813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772402  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712013139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772415  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712131897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:10.772427  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712269473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772444  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712503525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772456  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712659007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772512  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712779064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772533  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712986218Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	I0526 21:25:10.772638  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713141331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00037b050 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.2 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	I0526 21:25:10.772659  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713322225Z" level=info msg="Connect containerd service"
	I0526 21:25:10.772672  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713538361Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	I0526 21:25:10.772693  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714213931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:10.772708  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714359921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:10.772723  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714868242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:10.772734  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715023618Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:10.772747  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715142631Z" level=info msg="containerd successfully booted in 0.038760s"
	I0526 21:25:10.772760  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726087774Z" level=info msg="Start subscribing containerd event"
	I0526 21:25:10.772778  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726733995Z" level=info msg="Start recovering state"
	I0526 21:25:10.772791  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781395051Z" level=info msg="Start event monitor"
	I0526 21:25:10.772803  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781771001Z" level=info msg="Start snapshots syncer"
	I0526 21:25:10.772816  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781893491Z" level=info msg="Start cni network conf syncer"
	I0526 21:25:10.772828  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.782003464Z" level=info msg="Start streaming server"
	I0526 21:25:10.772848  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.484581294Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772886  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.490843770Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772916  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.501056680Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772936  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.508591647Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.772955  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.580716340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486 pid=2407
	I0526 21:25:10.772975  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.598809833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb pid=2435
	I0526 21:25:10.773004  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602060491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5 pid=2434
	I0526 21:25:10.773028  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602007310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e pid=2452
	I0526 21:25:10.773050  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.066808539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\""
	I0526 21:25:10.773076  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.074803022Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	I0526 21:25:10.773109  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.194718464Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:10.773129  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.196219933Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:10.773160  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.262678371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\""
	I0526 21:25:10.773188  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.272571919Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	I0526 21:25:10.773220  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.347228547Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:10.773243  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.349365690Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:10.773267  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.419043703Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\" returns successfully"
	I0526 21:25:10.773304  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.520520792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\""
	I0526 21:25:10.773328  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.527415671Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	I0526 21:25:10.773361  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.566421321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\""
	I0526 21:25:10.773388  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.575850717Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	I0526 21:25:10.773419  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.621335319Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:10.773446  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.623169879Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:10.773470  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.681255114Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\" returns successfully"
	I0526 21:25:10.773498  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.683704929Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:10.773522  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.684577023Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:10.773546  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.017920282Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\" returns successfully"
	I0526 21:25:10.773564  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.056525418Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\" returns successfully"
	I0526 21:25:10.773635  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.290788536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	I0526 21:25:10.773663  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.802102062Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.773691  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.839975209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8 pid=2987
	I0526 21:25:10.773715  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.915628984Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.773738  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.950847165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a pid=3013
	I0526 21:25:10.773769  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.116312794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\""
	I0526 21:25:10.773796  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.127305490Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	I0526 21:25:10.773828  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.182202148Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:10.773851  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.188910123Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:10.773875  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.381612238Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\" returns successfully"
	I0526 21:25:10.773900  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.674364903Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	I0526 21:25:10.773928  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683119285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:10.773960  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683711665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:10.773983  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582858367Z" level=error msg="get state for 53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8" error="context deadline exceeded: unknown"
	I0526 21:25:10.773999  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582967226Z" level=warning msg="unknown status" status=0
	I0526 21:25:10.774031  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.969753374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\""
	I0526 21:25:10.774059  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.975070195Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	I0526 21:25:10.774086  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.027887855Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:10.774108  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.029566085Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:10.774136  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.574608517Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\" returns successfully"
	I0526 21:25:10.774158  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.297649575Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.774183  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.323344186Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:10.774211  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.332120092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55 pid=3313
	I0526 21:25:10.774238  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.442356819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900 pid=3376
	I0526 21:25:10.774269  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.792546853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\""
	I0526 21:25:10.774301  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.796339883Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	I0526 21:25:10.774329  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.843281999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\""
	I0526 21:25:10.774358  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.849108598Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	I0526 21:25:10.774392  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.875948742Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:10.774411  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.879073015Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:10.774443  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.915826719Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:10.774468  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.918179651Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:10.774491  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.083539707Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\" returns successfully"
	I0526 21:25:10.774511  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.120722012Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\" returns successfully"
	I0526 21:25:10.791687  527485 logs.go:123] Gathering logs for kubelet ...
	I0526 21:25:10.791704  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:25:10.816785  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:10 UTC. --
	I0526 21:25:10.816809  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:10.816837  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.816878  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.816898  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365155    2343 server.go:416] Version: v1.20.2
	I0526 21:25:10.816915  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365664    2343 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:10.816937  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.382328    2343 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.816971  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:22.383887    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817007  527485 command_runner.go:124] > May 26 21:23:24 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:24.586559    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817034  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.392858    2343 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:10.817055  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.393993    2343 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:10.817119  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.394298    2343 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:10.817142  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395126    2343 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:10.817161  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395334    2343 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:10.817179  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395348    2343 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:10.817195  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395816    2343 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:10.817219  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395929    2343 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.817244  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396315    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.817262  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396571    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.817283  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397666    2343 remote_image.go:50] parsed scheme: ""
	I0526 21:25:10.817303  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397691    2343 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.817326  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397829    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.817342  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397957    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.817355  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.400786    2343 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:10.817364  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.401761    2343 kubelet.go:273] Watching apiserver
	I0526 21:25:10.817388  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.419726    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817413  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.433343    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817430  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.434846    2343 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:10.817454  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.435179    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817469  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.695431    2343 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:10.817480  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:10.817494  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:27.696850    2343 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0526 21:25:10.817506  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.698714    2343 server.go:1176] Started kubelet
	I0526 21:25:10.817516  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.699681    2343 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:10.817528  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.701131    2343 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:10.817610  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.701698    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:10.817626  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.703923    2343 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:10.817637  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.707734    2343 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:10.817649  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.708096    2343 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:10.817680  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.708889    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817719  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.709701    2343 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.817746  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.711040    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.817763  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711583    2343 client.go:86] parsed scheme: "unix"
	I0526 21:25:10.817779  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711779    2343 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:10.817796  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712280    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.817808  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712673    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.817820  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782226    2343 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:10.817833  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782318    2343 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:10.817852  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782638    2343 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:10.817952  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.799125    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:10.817986  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.809183    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.818004  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810505    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818016  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810636    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818030  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876097    2343 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:10.818043  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876127    2343 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:10.818056  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876145    2343 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:10.818074  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.876191    2343 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:10.818097  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.877853    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818120  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910604    2343 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818134  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910787    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818151  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.976408    2343 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
	I0526 21:25:10.818165  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.987845    2343 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:10.818204  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.000709    2343 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:10.818223  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.001042    2343 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:10.818247  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.004395    2343 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818267  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.010900    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818293  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.011906    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.818323  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.012281    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818343  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.111839    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818362  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.177382    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818382  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.180087    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818402  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.181373    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818421  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.182941    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.818459  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185069    2343 status_manager.go:550] Failed to get status for pod "kube-controller-manager-multinode-20210526212238-510955_kube-system(474c55dfb64741cc485e46b6bb9f2dc0)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818497  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185417    2343 status_manager.go:550] Failed to get status for pod "kube-scheduler-multinode-20210526212238-510955_kube-system(6b4a0ee8b3d15a1c2e47c15d32e6eb0d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818538  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.201047    2343 status_manager.go:550] Failed to get status for pod "kube-apiserver-multinode-20210526212238-510955_kube-system(b42b6879229f245abab6047de8662a2f)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818574  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.202364    2343 status_manager.go:550] Failed to get status for pod "etcd-multinode-20210526212238-510955_kube-system(34530b4d5ce1b17919f3b8976b2d0456)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818594  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.212215    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818621  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309602    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.818643  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309839    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.818673  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310062    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.818706  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310275    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818740  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310572    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818772  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310900    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818805  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311066    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.818834  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311200    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818858  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311326    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.818884  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.311324    2343 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818909  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311643    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:10.818931  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311955    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.818944  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.312763    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.818969  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.318006    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.818993  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.361617    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819006  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.412938    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819019  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.414299    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.819041  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.420140    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819055  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.513925    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819068  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.614235    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819090  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.620010    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819103  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.714407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819124  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.717664    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819137  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.815037    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819159  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.819848    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819172  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.915364    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819186  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.015843    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819212  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.112804    2343 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819226  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.116234    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819240  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.217167    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819263  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.219890    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819282  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:29.223096    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.819296  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.317528    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819306  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.418231    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819326  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.419707    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:10.819339  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.520018    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819352  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.620736    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819364  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.721115    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819376  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.821411    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819388  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.921772    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819398  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.022147    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819412  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.122970    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819424  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.223407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819437  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.323609    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819451  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.424033    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819465  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.524613    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819477  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.625186    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819489  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.725563    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819501  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.826076    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819516  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.932677    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819529  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:31.021296    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.819568  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.033185    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819581  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.133540    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819592  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.234158    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819605  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.334934    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819618  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.435265    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819630  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.535646    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819643  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.636091    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819656  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.736769    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819675  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.837337    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819692  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.937851    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819711  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.038171    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819730  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.138719    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819750  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.239058    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819769  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.339598    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819782  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.440290    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819794  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.540624    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819806  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.641006    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819821  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.741403    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819840  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.841966    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819860  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.942585    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819885  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.002095    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.819904  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.042747    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819921  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.142869    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819933  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.243254    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819946  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.343706    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819957  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.444105    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819969  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.545421    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819981  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.645867    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.819994  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.746343    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820006  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.846868    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820019  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.947104    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820031  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.047842    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820043  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.148334    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820055  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.248550    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820069  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.349232    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820083  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.449632    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820098  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.549987    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820111  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.650314    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820123  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.751182    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820133  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:34.832693    2343 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:10.820150  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.841269    2343 nodelease.go:49] failed to get node "multinode-20210526212238-510955" when trying to set owner ref to the node lease: nodes "multinode-20210526212238-510955" not found
	I0526 21:25:10.820162  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.851652    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820176  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.952325    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:10.820189  527485 command_runner.go:124] > May 26 21:23:35 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:35.015600    2343 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:10.820206  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:38.003372    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.820221  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:38.252332    2343 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.820233  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0526 21:25:10.820242  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: kubelet.service: Succeeded.
	I0526 21:25:10.820251  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0526 21:25:10.820260  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:10.820281  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.820301  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:10.820312  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.567074    2767 server.go:416] Version: v1.20.2
	I0526 21:25:10.820325  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.569090    2767 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:10.820337  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.580189    2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0526 21:25:10.820348  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.581836    2767 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:10.820360  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.594567    2767 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:10.820371  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596007    2767 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:10.820410  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596173    2767 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:10.820424  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596418    2767 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:10.820436  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596689    2767 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:10.820447  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596801    2767 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:10.820456  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597107    2767 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:10.820467  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597233    2767 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.820480  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597387    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.820490  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597579    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.820500  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597846    2767 remote_image.go:50] parsed scheme: ""
	I0526 21:25:10.820510  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597965    2767 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:10.820526  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598781    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.820537  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598958    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.820547  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599605    2767 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:10.820556  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599963    2767 kubelet.go:273] Watching apiserver
	I0526 21:25:10.820568  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.629159    2767 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:10.820581  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.914429    2767 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:10.820590  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:10.820600  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.918059    2767 server.go:1176] Started kubelet
	I0526 21:25:10.820610  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.928363    2767 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:10.820620  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.931699    2767 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:10.820629  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.943931    2767 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:10.820639  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.945256    2767 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:10.820654  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.949736    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.820666  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.949953    2767 client.go:86] parsed scheme: "unix"
	I0526 21:25:10.820682  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950079    2767 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:10.820701  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950244    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:10.820716  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950360    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:10.820733  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.960536    2767 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:10.820750  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.047200    2767 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:10.820770  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063101    2767 kubelet_node_status.go:109] Node multinode-20210526212238-510955 was previously registered
	I0526 21:25:10.820781  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063585    2767 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:10.820791  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100820    2767 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:10.820803  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100987    2767 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:10.820813  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.101019    2767 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:10.820828  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:44.101062    2767 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:10.820838  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167420    2767 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:10.820848  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167823    2767 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:10.820858  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167963    2767 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:10.820876  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168200    2767 state_mem.go:88] [cpumanager] updated default cpuset: ""
	I0526 21:25:10.820888  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168317    2767 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	I0526 21:25:10.820897  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168438    2767 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:10.820909  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: W0526 21:23:44.170589    2767 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:10.820919  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.170973    2767 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:10.820929  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201167    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820939  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201423    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820949  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202839    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820961  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202968    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.820983  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349811    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821004  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349855    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821023  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349894    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.821043  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349913    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.821065  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821085  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821105  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:10.821126  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:10.821144  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349988    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:10.821165  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350013    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.821188  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350027    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:10.821198  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:10.821213  527485 command_runner.go:124] > May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.821224  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	I0526 21:25:10.821234  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	I0526 21:25:10.821249  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.821260  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821282  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821302  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821323  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821342  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:10.821354  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821374  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821396  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821416  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821436  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:10.821452  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:10.821472  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	I0526 21:25:10.821483  527485 command_runner.go:124] > May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821493  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:10.821521  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:10.821545  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:10.821568  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:10.821592  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:10.852060  527485 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:25:10.852084  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0526 21:25:11.005822  527485 command_runner.go:124] > Name:               multinode-20210526212238-510955
	I0526 21:25:11.005848  527485 command_runner.go:124] > Roles:              control-plane,master
	I0526 21:25:11.005856  527485 command_runner.go:124] > Labels:             beta.kubernetes.io/arch=amd64
	I0526 21:25:11.005861  527485 command_runner.go:124] >                     beta.kubernetes.io/os=linux
	I0526 21:25:11.005867  527485 command_runner.go:124] >                     kubernetes.io/arch=amd64
	I0526 21:25:11.005877  527485 command_runner.go:124] >                     kubernetes.io/hostname=multinode-20210526212238-510955
	I0526 21:25:11.005882  527485 command_runner.go:124] >                     kubernetes.io/os=linux
	I0526 21:25:11.005890  527485 command_runner.go:124] >                     minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	I0526 21:25:11.005898  527485 command_runner.go:124] >                     minikube.k8s.io/name=multinode-20210526212238-510955
	I0526 21:25:11.005906  527485 command_runner.go:124] >                     minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	I0526 21:25:11.005915  527485 command_runner.go:124] >                     minikube.k8s.io/version=v1.20.0
	I0526 21:25:11.005921  527485 command_runner.go:124] >                     node-role.kubernetes.io/control-plane=
	I0526 21:25:11.005927  527485 command_runner.go:124] >                     node-role.kubernetes.io/master=
	I0526 21:25:11.005937  527485 command_runner.go:124] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	I0526 21:25:11.005948  527485 command_runner.go:124] >                     node.alpha.kubernetes.io/ttl: 0
	I0526 21:25:11.005960  527485 command_runner.go:124] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0526 21:25:11.005971  527485 command_runner.go:124] > CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	I0526 21:25:11.005992  527485 command_runner.go:124] > Taints:             <none>
	I0526 21:25:11.006002  527485 command_runner.go:124] > Unschedulable:      false
	I0526 21:25:11.006007  527485 command_runner.go:124] > Lease:
	I0526 21:25:11.006017  527485 command_runner.go:124] >   HolderIdentity:  multinode-20210526212238-510955
	I0526 21:25:11.006027  527485 command_runner.go:124] >   AcquireTime:     <unset>
	I0526 21:25:11.006036  527485 command_runner.go:124] >   RenewTime:       Wed, 26 May 2021 21:25:04 +0000
	I0526 21:25:11.006041  527485 command_runner.go:124] > Conditions:
	I0526 21:25:11.006056  527485 command_runner.go:124] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0526 21:25:11.006074  527485 command_runner.go:124] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0526 21:25:11.006095  527485 command_runner.go:124] >   MemoryPressure   False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0526 21:25:11.006138  527485 command_runner.go:124] >   DiskPressure     False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0526 21:25:11.006157  527485 command_runner.go:124] >   PIDPressure      False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0526 21:25:11.006184  527485 command_runner.go:124] >   Ready            True    Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	I0526 21:25:11.006193  527485 command_runner.go:124] > Addresses:
	I0526 21:25:11.006200  527485 command_runner.go:124] >   InternalIP:  192.168.39.229
	I0526 21:25:11.006208  527485 command_runner.go:124] >   Hostname:    multinode-20210526212238-510955
	I0526 21:25:11.006212  527485 command_runner.go:124] > Capacity:
	I0526 21:25:11.006220  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:11.006225  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:11.006232  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:11.006236  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:11.006242  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:11.006246  527485 command_runner.go:124] > Allocatable:
	I0526 21:25:11.006252  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:11.006257  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:11.006261  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:11.006266  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:11.006272  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:11.006276  527485 command_runner.go:124] > System Info:
	I0526 21:25:11.006282  527485 command_runner.go:124] >   Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	I0526 21:25:11.006288  527485 command_runner.go:124] >   System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	I0526 21:25:11.006295  527485 command_runner.go:124] >   Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	I0526 21:25:11.006300  527485 command_runner.go:124] >   Kernel Version:             4.19.182
	I0526 21:25:11.006313  527485 command_runner.go:124] >   OS Image:                   Buildroot 2020.02.12
	I0526 21:25:11.006317  527485 command_runner.go:124] >   Operating System:           linux
	I0526 21:25:11.006323  527485 command_runner.go:124] >   Architecture:               amd64
	I0526 21:25:11.006328  527485 command_runner.go:124] >   Container Runtime Version:  containerd://1.4.4
	I0526 21:25:11.006334  527485 command_runner.go:124] >   Kubelet Version:            v1.20.2
	I0526 21:25:11.006339  527485 command_runner.go:124] >   Kube-Proxy Version:         v1.20.2
	I0526 21:25:11.006345  527485 command_runner.go:124] > PodCIDR:                      10.244.0.0/24
	I0526 21:25:11.006350  527485 command_runner.go:124] > PodCIDRs:                     10.244.0.0/24
	I0526 21:25:11.006357  527485 command_runner.go:124] > Non-terminated Pods:          (8 in total)
	I0526 21:25:11.006368  527485 command_runner.go:124] >   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	I0526 21:25:11.006382  527485 command_runner.go:124] >   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	I0526 21:25:11.006395  527485 command_runner.go:124] >   kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     78s
	I0526 21:25:11.006407  527485 command_runner.go:124] >   kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         87s
	I0526 21:25:11.006419  527485 command_runner.go:124] >   kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      78s
	I0526 21:25:11.006430  527485 command_runner.go:124] >   kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	I0526 21:25:11.006451  527485 command_runner.go:124] >   kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	I0526 21:25:11.006465  527485 command_runner.go:124] >   kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	I0526 21:25:11.006478  527485 command_runner.go:124] >   kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	I0526 21:25:11.006490  527485 command_runner.go:124] >   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	I0526 21:25:11.006496  527485 command_runner.go:124] > Allocated resources:
	I0526 21:25:11.006501  527485 command_runner.go:124] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0526 21:25:11.006509  527485 command_runner.go:124] >   Resource           Requests     Limits
	I0526 21:25:11.006514  527485 command_runner.go:124] >   --------           --------     ------
	I0526 21:25:11.006521  527485 command_runner.go:124] >   cpu                850m (42%)   100m (5%)
	I0526 21:25:11.006526  527485 command_runner.go:124] >   memory             220Mi (10%)  220Mi (10%)
	I0526 21:25:11.006533  527485 command_runner.go:124] >   ephemeral-storage  100Mi (0%)   0 (0%)
	I0526 21:25:11.006539  527485 command_runner.go:124] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0526 21:25:11.006545  527485 command_runner.go:124] > Events:
	I0526 21:25:11.006552  527485 command_runner.go:124] >   Type    Reason                   Age                  From        Message
	I0526 21:25:11.006561  527485 command_runner.go:124] >   ----    ------                   ----                 ----        -------
	I0526 21:25:11.006568  527485 command_runner.go:124] >   Normal  Starting                 104s                 kubelet     Starting kubelet.
	I0526 21:25:11.006580  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  103s (x4 over 104s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:11.006592  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    103s (x3 over 104s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:11.006603  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     103s (x3 over 104s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:11.006615  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  103s                 kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:11.006622  527485 command_runner.go:124] >   Normal  Starting                 88s                  kubelet     Starting kubelet.
	I0526 21:25:11.006634  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  87s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:11.006644  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    87s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:11.006655  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     87s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:11.006670  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  87s                  kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:11.006684  527485 command_runner.go:124] >   Normal  Starting                 77s                  kube-proxy  Starting kube-proxy.
	I0526 21:25:11.006702  527485 command_runner.go:124] >   Normal  NodeReady                67s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
	I0526 21:25:11.009708  527485 logs.go:123] Gathering logs for kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] ...
	I0526 21:25:11.009735  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:11.032379  527485 command_runner.go:124] ! Flag --port has been deprecated, see --secure-port instead.
	I0526 21:25:11.032406  527485 command_runner.go:124] ! I0526 21:23:30.770698       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:11.032417  527485 command_runner.go:124] ! I0526 21:23:31.105740       1 controllermanager.go:176] Version: v1.20.2
	I0526 21:25:11.032433  527485 command_runner.go:124] ! I0526 21:23:31.110528       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:11.032447  527485 command_runner.go:124] ! I0526 21:23:31.110685       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:11.032466  527485 command_runner.go:124] ! I0526 21:23:31.111406       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	I0526 21:25:11.032479  527485 command_runner.go:124] ! I0526 21:23:31.111685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:11.032495  527485 command_runner.go:124] ! I0526 21:23:37.283320       1 shared_informer.go:240] Waiting for caches to sync for tokens
	I0526 21:25:11.032508  527485 command_runner.go:124] ! I0526 21:23:37.384858       1 shared_informer.go:247] Caches are synced for tokens 
	I0526 21:25:11.032520  527485 command_runner.go:124] ! I0526 21:23:37.398260       1 controllermanager.go:554] Started "csrcleaner"
	I0526 21:25:11.032537  527485 command_runner.go:124] ! I0526 21:23:37.398681       1 cleaner.go:82] Starting CSR cleaner controller
	I0526 21:25:11.032562  527485 command_runner.go:124] ! I0526 21:23:37.436326       1 controllermanager.go:554] Started "tokencleaner"
	I0526 21:25:11.032577  527485 command_runner.go:124] ! I0526 21:23:37.436948       1 tokencleaner.go:118] Starting token cleaner controller
	I0526 21:25:11.032593  527485 command_runner.go:124] ! I0526 21:23:37.437051       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0526 21:25:11.032609  527485 command_runner.go:124] ! I0526 21:23:37.437060       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0526 21:25:11.032628  527485 command_runner.go:124] ! E0526 21:23:37.458692       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	I0526 21:25:11.032646  527485 command_runner.go:124] ! W0526 21:23:37.458788       1 controllermanager.go:546] Skipping "service"
	I0526 21:25:11.032662  527485 command_runner.go:124] ! I0526 21:23:37.485897       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	I0526 21:25:11.032676  527485 command_runner.go:124] ! W0526 21:23:37.486148       1 controllermanager.go:546] Skipping "ephemeral-volume"
	I0526 21:25:11.032690  527485 command_runner.go:124] ! I0526 21:23:37.486971       1 publisher.go:98] Starting root CA certificate configmap publisher
	I0526 21:25:11.032704  527485 command_runner.go:124] ! I0526 21:23:37.487325       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0526 21:25:11.032718  527485 command_runner.go:124] ! I0526 21:23:37.514186       1 controllermanager.go:554] Started "endpointslicemirroring"
	I0526 21:25:11.032737  527485 command_runner.go:124] ! I0526 21:23:37.515190       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0526 21:25:11.032751  527485 command_runner.go:124] ! I0526 21:23:37.515570       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0526 21:25:11.032772  527485 command_runner.go:124] ! I0526 21:23:37.550580       1 controllermanager.go:554] Started "replicaset"
	I0526 21:25:11.032787  527485 command_runner.go:124] ! I0526 21:23:37.551218       1 replica_set.go:182] Starting replicaset controller
	I0526 21:25:11.032807  527485 command_runner.go:124] ! I0526 21:23:37.551414       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0526 21:25:11.032822  527485 command_runner.go:124] ! I0526 21:23:37.987267       1 controllermanager.go:554] Started "horizontalpodautoscaling"
	I0526 21:25:11.032836  527485 command_runner.go:124] ! I0526 21:23:37.988181       1 horizontal.go:169] Starting HPA controller
	I0526 21:25:11.032849  527485 command_runner.go:124] ! I0526 21:23:37.988418       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0526 21:25:11.032880  527485 command_runner.go:124] ! I0526 21:23:38.238507       1 controllermanager.go:554] Started "persistentvolume-binder"
	I0526 21:25:11.032894  527485 command_runner.go:124] ! I0526 21:23:38.238941       1 pv_controller_base.go:307] Starting persistent volume controller
	I0526 21:25:11.032911  527485 command_runner.go:124] ! I0526 21:23:38.238953       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0526 21:25:11.032926  527485 command_runner.go:124] ! I0526 21:23:38.636899       1 controllermanager.go:554] Started "garbagecollector"
	I0526 21:25:11.032942  527485 command_runner.go:124] ! I0526 21:23:38.636902       1 garbagecollector.go:142] Starting garbage collector controller
	I0526 21:25:11.032957  527485 command_runner.go:124] ! I0526 21:23:38.636960       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:11.032970  527485 command_runner.go:124] ! I0526 21:23:38.637525       1 graph_builder.go:289] GraphBuilder running
	I0526 21:25:11.032982  527485 command_runner.go:124] ! I0526 21:23:39.037283       1 controllermanager.go:554] Started "disruption"
	I0526 21:25:11.032997  527485 command_runner.go:124] ! I0526 21:23:39.037574       1 disruption.go:331] Starting disruption controller
	I0526 21:25:11.033012  527485 command_runner.go:124] ! I0526 21:23:39.037585       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0526 21:25:11.033028  527485 command_runner.go:124] ! I0526 21:23:39.286540       1 controllermanager.go:554] Started "clusterrole-aggregation"
	I0526 21:25:11.033044  527485 command_runner.go:124] ! I0526 21:23:39.286598       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
	I0526 21:25:11.033057  527485 command_runner.go:124] ! I0526 21:23:39.286605       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0526 21:25:11.033069  527485 command_runner.go:124] ! I0526 21:23:39.537304       1 controllermanager.go:554] Started "pvc-protection"
	I0526 21:25:11.033089  527485 command_runner.go:124] ! I0526 21:23:39.537579       1 pvc_protection_controller.go:110] Starting PVC protection controller
	I0526 21:25:11.033107  527485 command_runner.go:124] ! I0526 21:23:39.537670       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0526 21:25:11.033121  527485 command_runner.go:124] ! I0526 21:23:39.786982       1 controllermanager.go:554] Started "pv-protection"
	I0526 21:25:11.033135  527485 command_runner.go:124] ! I0526 21:23:39.787110       1 pv_protection_controller.go:83] Starting PV protection controller
	I0526 21:25:11.033150  527485 command_runner.go:124] ! I0526 21:23:39.787118       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0526 21:25:11.033164  527485 command_runner.go:124] ! I0526 21:23:40.036383       1 controllermanager.go:554] Started "endpoint"
	I0526 21:25:11.033179  527485 command_runner.go:124] ! I0526 21:23:40.036415       1 endpoints_controller.go:184] Starting endpoint controller
	I0526 21:25:11.033195  527485 command_runner.go:124] ! I0526 21:23:40.037058       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0526 21:25:11.033208  527485 command_runner.go:124] ! I0526 21:23:40.288607       1 controllermanager.go:554] Started "podgc"
	I0526 21:25:11.033221  527485 command_runner.go:124] ! I0526 21:23:40.288827       1 gc_controller.go:89] Starting GC controller
	I0526 21:25:11.033240  527485 command_runner.go:124] ! I0526 21:23:40.289411       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0526 21:25:11.033265  527485 command_runner.go:124] ! W0526 21:23:40.988861       1 shared_informer.go:494] resyncPeriod 13h30m7.5724073s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:11.033283  527485 command_runner.go:124] ! I0526 21:23:40.989960       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
	I0526 21:25:11.033302  527485 command_runner.go:124] ! I0526 21:23:40.990215       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
	I0526 21:25:11.033319  527485 command_runner.go:124] ! I0526 21:23:40.990426       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
	I0526 21:25:11.033338  527485 command_runner.go:124] ! I0526 21:23:40.990971       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0526 21:25:11.033358  527485 command_runner.go:124] ! I0526 21:23:40.991569       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0526 21:25:11.033378  527485 command_runner.go:124] ! I0526 21:23:40.991963       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0526 21:25:11.033396  527485 command_runner.go:124] ! I0526 21:23:40.992141       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
	I0526 21:25:11.033415  527485 command_runner.go:124] ! I0526 21:23:40.992301       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0526 21:25:11.033430  527485 command_runner.go:124] ! I0526 21:23:40.992532       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
	I0526 21:25:11.033455  527485 command_runner.go:124] ! W0526 21:23:40.992690       1 shared_informer.go:494] resyncPeriod 13h37m25.694603534s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:11.033474  527485 command_runner.go:124] ! I0526 21:23:40.993075       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0526 21:25:11.033494  527485 command_runner.go:124] ! I0526 21:23:40.993243       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0526 21:25:11.033512  527485 command_runner.go:124] ! I0526 21:23:40.993580       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
	I0526 21:25:11.033528  527485 command_runner.go:124] ! I0526 21:23:40.993747       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0526 21:25:11.033547  527485 command_runner.go:124] ! I0526 21:23:40.993780       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
	I0526 21:25:11.033565  527485 command_runner.go:124] ! I0526 21:23:40.993805       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0526 21:25:11.033584  527485 command_runner.go:124] ! I0526 21:23:40.993841       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
	I0526 21:25:11.033603  527485 command_runner.go:124] ! I0526 21:23:40.993861       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
	I0526 21:25:11.033619  527485 command_runner.go:124] ! I0526 21:23:40.993876       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
	I0526 21:25:11.033636  527485 command_runner.go:124] ! I0526 21:23:40.993891       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
	I0526 21:25:11.033657  527485 command_runner.go:124] ! I0526 21:23:40.993951       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0526 21:25:11.033676  527485 command_runner.go:124] ! I0526 21:23:40.993980       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0526 21:25:11.033693  527485 command_runner.go:124] ! I0526 21:23:40.994082       1 controllermanager.go:554] Started "resourcequota"
	I0526 21:25:11.033717  527485 command_runner.go:124] ! I0526 21:23:40.994178       1 resource_quota_controller.go:273] Starting resource quota controller
	I0526 21:25:11.033734  527485 command_runner.go:124] ! I0526 21:23:40.994191       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:11.033748  527485 command_runner.go:124] ! I0526 21:23:40.994219       1 resource_quota_monitor.go:304] QuotaMonitor running
	I0526 21:25:11.033765  527485 command_runner.go:124] ! I0526 21:23:41.028175       1 controllermanager.go:554] Started "namespace"
	I0526 21:25:11.033780  527485 command_runner.go:124] ! I0526 21:23:41.028716       1 namespace_controller.go:200] Starting namespace controller
	I0526 21:25:11.033793  527485 command_runner.go:124] ! I0526 21:23:41.028992       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0526 21:25:11.033804  527485 command_runner.go:124] ! I0526 21:23:41.051981       1 controllermanager.go:554] Started "ttl"
	I0526 21:25:11.033819  527485 command_runner.go:124] ! I0526 21:23:41.052926       1 ttl_controller.go:121] Starting TTL controller
	I0526 21:25:11.033833  527485 command_runner.go:124] ! I0526 21:23:41.053383       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0526 21:25:11.033847  527485 command_runner.go:124] ! I0526 21:23:41.289145       1 controllermanager.go:554] Started "attachdetach"
	I0526 21:25:11.033860  527485 command_runner.go:124] ! W0526 21:23:41.289246       1 controllermanager.go:546] Skipping "ttl-after-finished"
	I0526 21:25:11.033873  527485 command_runner.go:124] ! I0526 21:23:41.289282       1 attach_detach_controller.go:328] Starting attach detach controller
	I0526 21:25:11.033889  527485 command_runner.go:124] ! I0526 21:23:41.289291       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0526 21:25:11.033904  527485 command_runner.go:124] ! I0526 21:23:41.537362       1 controllermanager.go:554] Started "serviceaccount"
	I0526 21:25:11.033918  527485 command_runner.go:124] ! I0526 21:23:41.537403       1 serviceaccounts_controller.go:117] Starting service account controller
	I0526 21:25:11.033933  527485 command_runner.go:124] ! I0526 21:23:41.538137       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0526 21:25:11.033946  527485 command_runner.go:124] ! I0526 21:23:41.787243       1 controllermanager.go:554] Started "deployment"
	I0526 21:25:11.033959  527485 command_runner.go:124] ! I0526 21:23:41.788023       1 deployment_controller.go:153] Starting deployment controller
	I0526 21:25:11.033974  527485 command_runner.go:124] ! I0526 21:23:41.790417       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0526 21:25:11.033986  527485 command_runner.go:124] ! I0526 21:23:41.936235       1 controllermanager.go:554] Started "csrapproving"
	I0526 21:25:11.034003  527485 command_runner.go:124] ! I0526 21:23:41.936293       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0526 21:25:11.034025  527485 command_runner.go:124] ! I0526 21:23:41.936301       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0526 21:25:11.034044  527485 command_runner.go:124] ! I0526 21:23:42.137381       1 request.go:655] Throttling request took 1.048213324s, request: GET:https://192.168.39.229:8443/apis/extensions/v1beta1?timeout=32s
	I0526 21:25:11.034059  527485 command_runner.go:124] ! I0526 21:23:42.189224       1 node_ipam_controller.go:91] Sending events to api server.
	I0526 21:25:11.034077  527485 command_runner.go:124] ! I0526 21:23:52.210125       1 range_allocator.go:82] Sending events to api server.
	I0526 21:25:11.034096  527485 command_runner.go:124] ! I0526 21:23:52.211056       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
	I0526 21:25:11.034110  527485 command_runner.go:124] ! I0526 21:23:52.211333       1 controllermanager.go:554] Started "nodeipam"
	I0526 21:25:11.034127  527485 command_runner.go:124] ! W0526 21:23:52.211708       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	I0526 21:25:11.034141  527485 command_runner.go:124] ! W0526 21:23:52.212021       1 controllermanager.go:546] Skipping "route"
	I0526 21:25:11.034153  527485 command_runner.go:124] ! I0526 21:23:52.212292       1 node_ipam_controller.go:159] Starting ipam controller
	I0526 21:25:11.034168  527485 command_runner.go:124] ! I0526 21:23:52.212876       1 shared_informer.go:240] Waiting for caches to sync for node
	I0526 21:25:11.034183  527485 command_runner.go:124] ! I0526 21:23:52.227871       1 node_lifecycle_controller.go:77] Sending events to api server
	I0526 21:25:11.034198  527485 command_runner.go:124] ! E0526 21:23:52.227991       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
	I0526 21:25:11.034213  527485 command_runner.go:124] ! W0526 21:23:52.228003       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
	I0526 21:25:11.034227  527485 command_runner.go:124] ! I0526 21:23:52.257128       1 controllermanager.go:554] Started "persistentvolume-expander"
	I0526 21:25:11.034240  527485 command_runner.go:124] ! I0526 21:23:52.257967       1 expand_controller.go:310] Starting expand controller
	I0526 21:25:11.034253  527485 command_runner.go:124] ! I0526 21:23:52.258344       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0526 21:25:11.034267  527485 command_runner.go:124] ! I0526 21:23:52.287731       1 controllermanager.go:554] Started "endpointslice"
	I0526 21:25:11.034282  527485 command_runner.go:124] ! I0526 21:23:52.287941       1 endpointslice_controller.go:237] Starting endpoint slice controller
	I0526 21:25:11.034298  527485 command_runner.go:124] ! I0526 21:23:52.287950       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0526 21:25:11.034311  527485 command_runner.go:124] ! I0526 21:23:52.334629       1 controllermanager.go:554] Started "daemonset"
	I0526 21:25:11.034323  527485 command_runner.go:124] ! I0526 21:23:52.334789       1 daemon_controller.go:285] Starting daemon sets controller
	I0526 21:25:11.034339  527485 command_runner.go:124] ! I0526 21:23:52.334797       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0526 21:25:11.034357  527485 command_runner.go:124] ! I0526 21:23:52.366633       1 controllermanager.go:554] Started "statefulset"
	I0526 21:25:11.034372  527485 command_runner.go:124] ! I0526 21:23:52.366920       1 stateful_set.go:146] Starting stateful set controller
	I0526 21:25:11.034389  527485 command_runner.go:124] ! I0526 21:23:52.367009       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0526 21:25:11.034402  527485 command_runner.go:124] ! I0526 21:23:52.395670       1 controllermanager.go:554] Started "cronjob"
	I0526 21:25:11.034414  527485 command_runner.go:124] ! I0526 21:23:52.395842       1 cronjob_controller.go:96] Starting CronJob Manager
	I0526 21:25:11.034431  527485 command_runner.go:124] ! I0526 21:23:52.416080       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0526 21:25:11.034448  527485 command_runner.go:124] ! I0526 21:23:52.416256       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0526 21:25:11.034469  527485 command_runner.go:124] ! I0526 21:23:52.416385       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034689  527485 command_runner.go:124] ! I0526 21:23:52.416862       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0526 21:25:11.034710  527485 command_runner.go:124] ! I0526 21:23:52.416958       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0526 21:25:11.034726  527485 command_runner.go:124] ! I0526 21:23:52.416975       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034741  527485 command_runner.go:124] ! I0526 21:23:52.417715       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0526 21:25:11.034765  527485 command_runner.go:124] ! I0526 21:23:52.417882       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0526 21:25:11.034785  527485 command_runner.go:124] ! I0526 21:23:52.418025       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034800  527485 command_runner.go:124] ! I0526 21:23:52.418373       1 controllermanager.go:554] Started "csrsigning"
	I0526 21:25:11.034816  527485 command_runner.go:124] ! I0526 21:23:52.418419       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0526 21:25:11.034835  527485 command_runner.go:124] ! I0526 21:23:52.418799       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:11.034851  527485 command_runner.go:124] ! I0526 21:23:52.418805       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0526 21:25:11.034865  527485 command_runner.go:124] ! I0526 21:23:52.515732       1 controllermanager.go:554] Started "bootstrapsigner"
	I0526 21:25:11.034880  527485 command_runner.go:124] ! I0526 21:23:52.516431       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0526 21:25:11.034896  527485 command_runner.go:124] ! I0526 21:23:52.765741       1 controllermanager.go:554] Started "replicationcontroller"
	I0526 21:25:11.034914  527485 command_runner.go:124] ! I0526 21:23:52.765769       1 replica_set.go:182] Starting replicationcontroller controller
	I0526 21:25:11.034931  527485 command_runner.go:124] ! I0526 21:23:52.765867       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0526 21:25:11.034944  527485 command_runner.go:124] ! I0526 21:23:52.915756       1 node_lifecycle_controller.go:380] Sending events to api server.
	I0526 21:25:11.034958  527485 command_runner.go:124] ! I0526 21:23:52.916150       1 taint_manager.go:163] Sending events to api server.
	I0526 21:25:11.034972  527485 command_runner.go:124] ! I0526 21:23:52.916342       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
	I0526 21:25:11.034987  527485 command_runner.go:124] ! I0526 21:23:52.916386       1 controllermanager.go:554] Started "nodelifecycle"
	I0526 21:25:11.035002  527485 command_runner.go:124] ! I0526 21:23:52.916749       1 node_lifecycle_controller.go:542] Starting node controller
	I0526 21:25:11.035017  527485 command_runner.go:124] ! I0526 21:23:52.916921       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0526 21:25:11.035030  527485 command_runner.go:124] ! I0526 21:23:53.165965       1 controllermanager.go:554] Started "job"
	I0526 21:25:11.035042  527485 command_runner.go:124] ! I0526 21:23:53.166025       1 job_controller.go:148] Starting job controller
	I0526 21:25:11.035057  527485 command_runner.go:124] ! I0526 21:23:53.167211       1 shared_informer.go:240] Waiting for caches to sync for job
	I0526 21:25:11.035073  527485 command_runner.go:124] ! I0526 21:23:53.170385       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:11.035099  527485 command_runner.go:124] ! W0526 21:23:53.178965       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955" does not exist
	I0526 21:25:11.035117  527485 command_runner.go:124] ! I0526 21:23:53.213010       1 shared_informer.go:247] Caches are synced for node 
	I0526 21:25:11.035130  527485 command_runner.go:124] ! I0526 21:23:53.213735       1 range_allocator.go:172] Starting range CIDR allocator
	I0526 21:25:11.035145  527485 command_runner.go:124] ! I0526 21:23:53.214071       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0526 21:25:11.035159  527485 command_runner.go:124] ! I0526 21:23:53.214233       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0526 21:25:11.035180  527485 command_runner.go:124] ! I0526 21:23:53.215982       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0526 21:25:11.035197  527485 command_runner.go:124] ! I0526 21:23:53.216587       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0526 21:25:11.035213  527485 command_runner.go:124] ! I0526 21:23:53.217085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0526 21:25:11.035248  527485 command_runner.go:124] ! I0526 21:23:53.217522       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0526 21:25:11.035267  527485 command_runner.go:124] ! I0526 21:23:53.218215       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0526 21:25:11.035282  527485 command_runner.go:124] ! I0526 21:23:53.218891       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0526 21:25:11.035298  527485 command_runner.go:124] ! I0526 21:23:53.229560       1 shared_informer.go:247] Caches are synced for namespace 
	I0526 21:25:11.035313  527485 command_runner.go:124] ! I0526 21:23:53.235029       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0526 21:25:11.035327  527485 command_runner.go:124] ! I0526 21:23:53.238654       1 shared_informer.go:247] Caches are synced for service account 
	I0526 21:25:11.035341  527485 command_runner.go:124] ! I0526 21:23:53.240824       1 shared_informer.go:247] Caches are synced for endpoint 
	I0526 21:25:11.035357  527485 command_runner.go:124] ! I0526 21:23:53.247379       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0526 21:25:11.035373  527485 command_runner.go:124] ! I0526 21:23:53.251558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0526 21:25:11.035386  527485 command_runner.go:124] ! I0526 21:23:53.252699       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0526 21:25:11.035397  527485 command_runner.go:124] ! I0526 21:23:53.256544       1 shared_informer.go:247] Caches are synced for TTL 
	I0526 21:25:11.035414  527485 command_runner.go:124] ! I0526 21:23:53.265652       1 range_allocator.go:373] Set node multinode-20210526212238-510955 PodCIDR to [10.244.0.0/24]
	I0526 21:25:11.035429  527485 command_runner.go:124] ! I0526 21:23:53.268627       1 shared_informer.go:247] Caches are synced for job 
	I0526 21:25:11.035444  527485 command_runner.go:124] ! I0526 21:23:53.268752       1 shared_informer.go:247] Caches are synced for stateful set 
	I0526 21:25:11.035458  527485 command_runner.go:124] ! I0526 21:23:53.290037       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0526 21:25:11.035473  527485 command_runner.go:124] ! I0526 21:23:53.290226       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0526 21:25:11.035486  527485 command_runner.go:124] ! I0526 21:23:53.292847       1 shared_informer.go:247] Caches are synced for deployment 
	I0526 21:25:11.035500  527485 command_runner.go:124] ! I0526 21:23:53.293728       1 shared_informer.go:247] Caches are synced for GC 
	I0526 21:25:11.035513  527485 command_runner.go:124] ! I0526 21:23:53.293879       1 shared_informer.go:247] Caches are synced for HPA 
	I0526 21:25:11.035529  527485 command_runner.go:124] ! I0526 21:23:53.293974       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0526 21:25:11.035544  527485 command_runner.go:124] ! I0526 21:23:53.317816       1 shared_informer.go:247] Caches are synced for taint 
	I0526 21:25:11.035557  527485 command_runner.go:124] ! I0526 21:23:53.317927       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I0526 21:25:11.035574  527485 command_runner.go:124] ! W0526 21:23:53.318278       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955. Assuming now as a timestamp.
	I0526 21:25:11.035593  527485 command_runner.go:124] ! I0526 21:23:53.318396       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0526 21:25:11.035608  527485 command_runner.go:124] ! I0526 21:23:53.318775       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0526 21:25:11.035637  527485 command_runner.go:124] ! I0526 21:23:53.319750       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955 event: Registered Node multinode-20210526212238-510955 in Controller"
	I0526 21:25:11.035664  527485 command_runner.go:124] ! I0526 21:23:53.337883       1 shared_informer.go:247] Caches are synced for disruption 
	I0526 21:25:11.035677  527485 command_runner.go:124] ! I0526 21:23:53.337896       1 disruption.go:339] Sending events to api server.
	I0526 21:25:11.035693  527485 command_runner.go:124] ! I0526 21:23:53.368948       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0526 21:25:11.035715  527485 command_runner.go:124] ! I0526 21:23:53.431193       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2wgbs"
	I0526 21:25:11.035740  527485 command_runner.go:124] ! I0526 21:23:53.431223       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0526 21:25:11.035755  527485 command_runner.go:124] ! I0526 21:23:53.459736       1 shared_informer.go:247] Caches are synced for expand 
	I0526 21:25:11.035773  527485 command_runner.go:124] ! I0526 21:23:53.479631       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:11.035789  527485 command_runner.go:124] ! I0526 21:23:53.487838       1 shared_informer.go:247] Caches are synced for PV protection 
	I0526 21:25:11.035802  527485 command_runner.go:124] ! I0526 21:23:53.489356       1 shared_informer.go:247] Caches are synced for attach detach 
	I0526 21:25:11.035818  527485 command_runner.go:124] ! I0526 21:23:53.494672       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:11.035834  527485 command_runner.go:124] ! I0526 21:23:53.539359       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0526 21:25:11.035858  527485 command_runner.go:124] ! I0526 21:23:53.545401       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbl42"
	I0526 21:25:11.035883  527485 command_runner.go:124] ! I0526 21:23:53.545422       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:11.035907  527485 command_runner.go:124] ! I0526 21:23:53.556102       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-multinode-20210526212238-510955" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:25:11.035932  527485 command_runner.go:124] ! I0526 21:23:53.567036       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tw67b"
	I0526 21:25:11.035960  527485 command_runner.go:124] ! E0526 21:23:53.635384       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0526 21:25:11.035977  527485 command_runner.go:124] ! I0526 21:23:53.689947       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:11.036002  527485 command_runner.go:124] ! I0526 21:23:53.733785       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0526 21:25:11.036029  527485 command_runner.go:124] ! I0526 21:23:53.758013       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:11.036045  527485 command_runner.go:124] ! I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:11.036058  527485 command_runner.go:124] ! I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:11.036078  527485 command_runner.go:124] ! I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:25:11.036097  527485 command_runner.go:124] ! I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0526 21:25:11.045061  527485 logs.go:123] Gathering logs for container status ...
	I0526 21:25:11.045080  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:25:11.070698  527485 command_runner.go:124] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	I0526 21:25:11.070728  527485 command_runner.go:124] > a9593dff4428d       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   1d96eb581f035
	I0526 21:25:11.070742  527485 command_runner.go:124] > 5d3df8c94eaed       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   722b1b257c571
	I0526 21:25:11.070769  527485 command_runner.go:124] > 69df1859ce4d1       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   53490c652b9e5
	I0526 21:25:11.070809  527485 command_runner.go:124] > de6efc6fec4b2       43154ddb57a83       About a minute ago   Running             kube-proxy                0                   038c42970362d
	I0526 21:25:11.070830  527485 command_runner.go:124] > c8538106e966b       0369cf4303ffd       About a minute ago   Running             etcd                      0                   2ad404c6a9c44
	I0526 21:25:11.070846  527485 command_runner.go:124] > e6bb9bee7539a       ed2c44fbdd78b       About a minute ago   Running             kube-scheduler            0                   24fd8b8599a6e
	I0526 21:25:11.070870  527485 command_runner.go:124] > 2314e41b1b443       a27166429d98e       About a minute ago   Running             kube-controller-manager   0                   73ada73fbbf0b
	I0526 21:25:11.070910  527485 command_runner.go:124] > a0581c0e5409b       a8c2fdb8bf76e       About a minute ago   Running             kube-apiserver            0                   fe43674906f20
	I0526 21:25:13.573645  527485 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0526 21:25:13.584618  527485 command_runner.go:124] > 2592
	I0526 21:25:13.584780  527485 api_server.go:70] duration metric: took 1m18.776650446s to wait for apiserver process to appear ...
	I0526 21:25:13.584801  527485 api_server.go:86] waiting for apiserver healthz status ...
	I0526 21:25:13.584833  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:25:13.584911  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:25:13.604799  527485 command_runner.go:124] > a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c
	I0526 21:25:13.604832  527485 cri.go:76] found id: "a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:13.604841  527485 cri.go:76] found id: ""
	I0526 21:25:13.604848  527485 logs.go:270] 1 containers: [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c]
	I0526 21:25:13.604907  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.608770  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.608906  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:25:13.608984  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:25:13.628454  527485 command_runner.go:124] > c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad
	I0526 21:25:13.628498  527485 cri.go:76] found id: "c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:13.628507  527485 cri.go:76] found id: ""
	I0526 21:25:13.628514  527485 logs.go:270] 1 containers: [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad]
	I0526 21:25:13.628558  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.632594  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.632662  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:25:13.632717  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:25:13.651568  527485 command_runner.go:124] > a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a
	I0526 21:25:13.651683  527485 cri.go:76] found id: "a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:13.651700  527485 cri.go:76] found id: ""
	I0526 21:25:13.651707  527485 logs.go:270] 1 containers: [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a]
	I0526 21:25:13.651744  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.655613  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.655703  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:25:13.655747  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:25:13.675717  527485 command_runner.go:124] > e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08
	I0526 21:25:13.675821  527485 cri.go:76] found id: "e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:13.675836  527485 cri.go:76] found id: ""
	I0526 21:25:13.675841  527485 logs.go:270] 1 containers: [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08]
	I0526 21:25:13.675871  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.679599  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.679693  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:25:13.679727  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:25:13.699794  527485 command_runner.go:124] > de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2
	I0526 21:25:13.699810  527485 cri.go:76] found id: "de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:13.699821  527485 cri.go:76] found id: ""
	I0526 21:25:13.699825  527485 logs.go:270] 1 containers: [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2]
	I0526 21:25:13.699852  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.703344  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.703637  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:25:13.703683  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:25:13.725683  527485 cri.go:76] found id: ""
	I0526 21:25:13.725697  527485 logs.go:270] 0 containers: []
	W0526 21:25:13.725702  527485 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:25:13.725707  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:25:13.725739  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:25:13.743433  527485 command_runner.go:124] > 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d
	I0526 21:25:13.743475  527485 cri.go:76] found id: "5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:13.743486  527485 cri.go:76] found id: ""
	I0526 21:25:13.743493  527485 logs.go:270] 1 containers: [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d]
	I0526 21:25:13.743519  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.747149  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.747433  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:25:13.747479  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:25:13.765103  527485 command_runner.go:124] > 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18
	I0526 21:25:13.766695  527485 cri.go:76] found id: "2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:13.766712  527485 cri.go:76] found id: ""
	I0526 21:25:13.766719  527485 logs.go:270] 1 containers: [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18]
	I0526 21:25:13.766751  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:13.770984  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:13.771048  527485 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:25:13.771072  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0526 21:25:13.911602  527485 command_runner.go:124] > Name:               multinode-20210526212238-510955
	I0526 21:25:13.911625  527485 command_runner.go:124] > Roles:              control-plane,master
	I0526 21:25:13.911633  527485 command_runner.go:124] > Labels:             beta.kubernetes.io/arch=amd64
	I0526 21:25:13.911641  527485 command_runner.go:124] >                     beta.kubernetes.io/os=linux
	I0526 21:25:13.911658  527485 command_runner.go:124] >                     kubernetes.io/arch=amd64
	I0526 21:25:13.911678  527485 command_runner.go:124] >                     kubernetes.io/hostname=multinode-20210526212238-510955
	I0526 21:25:13.911691  527485 command_runner.go:124] >                     kubernetes.io/os=linux
	I0526 21:25:13.911705  527485 command_runner.go:124] >                     minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	I0526 21:25:13.911715  527485 command_runner.go:124] >                     minikube.k8s.io/name=multinode-20210526212238-510955
	I0526 21:25:13.911725  527485 command_runner.go:124] >                     minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	I0526 21:25:13.911735  527485 command_runner.go:124] >                     minikube.k8s.io/version=v1.20.0
	I0526 21:25:13.911743  527485 command_runner.go:124] >                     node-role.kubernetes.io/control-plane=
	I0526 21:25:13.911751  527485 command_runner.go:124] >                     node-role.kubernetes.io/master=
	I0526 21:25:13.911775  527485 command_runner.go:124] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	I0526 21:25:13.911789  527485 command_runner.go:124] >                     node.alpha.kubernetes.io/ttl: 0
	I0526 21:25:13.911798  527485 command_runner.go:124] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0526 21:25:13.911810  527485 command_runner.go:124] > CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	I0526 21:25:13.911828  527485 command_runner.go:124] > Taints:             <none>
	I0526 21:25:13.911837  527485 command_runner.go:124] > Unschedulable:      false
	I0526 21:25:13.911840  527485 command_runner.go:124] > Lease:
	I0526 21:25:13.911852  527485 command_runner.go:124] >   HolderIdentity:  multinode-20210526212238-510955
	I0526 21:25:13.911864  527485 command_runner.go:124] >   AcquireTime:     <unset>
	I0526 21:25:13.911875  527485 command_runner.go:124] >   RenewTime:       Wed, 26 May 2021 21:25:04 +0000
	I0526 21:25:13.911886  527485 command_runner.go:124] > Conditions:
	I0526 21:25:13.911901  527485 command_runner.go:124] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0526 21:25:13.911916  527485 command_runner.go:124] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0526 21:25:13.911930  527485 command_runner.go:124] >   MemoryPressure   False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0526 21:25:13.911958  527485 command_runner.go:124] >   DiskPressure     False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0526 21:25:13.911980  527485 command_runner.go:124] >   PIDPressure      False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0526 21:25:13.912001  527485 command_runner.go:124] >   Ready            True    Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	I0526 21:25:13.912009  527485 command_runner.go:124] > Addresses:
	I0526 21:25:13.912014  527485 command_runner.go:124] >   InternalIP:  192.168.39.229
	I0526 21:25:13.912025  527485 command_runner.go:124] >   Hostname:    multinode-20210526212238-510955
	I0526 21:25:13.912035  527485 command_runner.go:124] > Capacity:
	I0526 21:25:13.912043  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:13.912056  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:13.912068  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:13.912080  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:13.912091  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:13.912101  527485 command_runner.go:124] > Allocatable:
	I0526 21:25:13.912106  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:13.912116  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:13.912126  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:13.912138  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:13.912150  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:13.912160  527485 command_runner.go:124] > System Info:
	I0526 21:25:13.912172  527485 command_runner.go:124] >   Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	I0526 21:25:13.912186  527485 command_runner.go:124] >   System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	I0526 21:25:13.912199  527485 command_runner.go:124] >   Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	I0526 21:25:13.912211  527485 command_runner.go:124] >   Kernel Version:             4.19.182
	I0526 21:25:13.912224  527485 command_runner.go:124] >   OS Image:                   Buildroot 2020.02.12
	I0526 21:25:13.912237  527485 command_runner.go:124] >   Operating System:           linux
	I0526 21:25:13.912247  527485 command_runner.go:124] >   Architecture:               amd64
	I0526 21:25:13.912259  527485 command_runner.go:124] >   Container Runtime Version:  containerd://1.4.4
	I0526 21:25:13.912271  527485 command_runner.go:124] >   Kubelet Version:            v1.20.2
	I0526 21:25:13.912281  527485 command_runner.go:124] >   Kube-Proxy Version:         v1.20.2
	I0526 21:25:13.912289  527485 command_runner.go:124] > PodCIDR:                      10.244.0.0/24
	I0526 21:25:13.912303  527485 command_runner.go:124] > PodCIDRs:                     10.244.0.0/24
	I0526 21:25:13.912316  527485 command_runner.go:124] > Non-terminated Pods:          (8 in total)
	I0526 21:25:13.912333  527485 command_runner.go:124] >   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	I0526 21:25:13.912353  527485 command_runner.go:124] >   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	I0526 21:25:13.912369  527485 command_runner.go:124] >   kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     80s
	I0526 21:25:13.912388  527485 command_runner.go:124] >   kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         89s
	I0526 21:25:13.912409  527485 command_runner.go:124] >   kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      80s
	I0526 21:25:13.912429  527485 command_runner.go:124] >   kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	I0526 21:25:13.912481  527485 command_runner.go:124] >   kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	I0526 21:25:13.912503  527485 command_runner.go:124] >   kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	I0526 21:25:13.912520  527485 command_runner.go:124] >   kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	I0526 21:25:13.912538  527485 command_runner.go:124] >   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	I0526 21:25:13.912548  527485 command_runner.go:124] > Allocated resources:
	I0526 21:25:13.912560  527485 command_runner.go:124] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0526 21:25:13.912573  527485 command_runner.go:124] >   Resource           Requests     Limits
	I0526 21:25:13.912586  527485 command_runner.go:124] >   --------           --------     ------
	I0526 21:25:13.912598  527485 command_runner.go:124] >   cpu                850m (42%)   100m (5%)
	I0526 21:25:13.912611  527485 command_runner.go:124] >   memory             220Mi (10%)  220Mi (10%)
	I0526 21:25:13.912625  527485 command_runner.go:124] >   ephemeral-storage  100Mi (0%)   0 (0%)
	I0526 21:25:13.912635  527485 command_runner.go:124] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0526 21:25:13.912642  527485 command_runner.go:124] > Events:
	I0526 21:25:13.912657  527485 command_runner.go:124] >   Type    Reason                   Age                  From        Message
	I0526 21:25:13.912674  527485 command_runner.go:124] >   ----    ------                   ----                 ----        -------
	I0526 21:25:13.912690  527485 command_runner.go:124] >   Normal  Starting                 106s                 kubelet     Starting kubelet.
	I0526 21:25:13.912710  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  105s (x4 over 106s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:13.912726  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    105s (x3 over 106s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:13.912745  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     105s (x3 over 106s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:13.912768  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  105s                 kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:13.912784  527485 command_runner.go:124] >   Normal  Starting                 90s                  kubelet     Starting kubelet.
	I0526 21:25:13.912801  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  89s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:13.912820  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    89s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:13.912840  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     89s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:13.912858  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  89s                  kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:13.912890  527485 command_runner.go:124] >   Normal  Starting                 79s                  kube-proxy  Starting kube-proxy.
	I0526 21:25:13.912903  527485 command_runner.go:124] >   Normal  NodeReady                69s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
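
For reference, the node description above was collected by running kubectl inside the guest over SSH, as the ssh_runner line at the top of this block shows. A minimal sketch of reproducing it by hand, assuming the multinode-20210526212238-510955 profile and its kubeconfig context still exist on the test host:

	# Inside the guest, the same invocation the log records:
	minikube -p multinode-20210526212238-510955 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# Or from the host, via the profile's kubeconfig context:
	kubectl --context multinode-20210526212238-510955 describe node multinode-20210526212238-510955
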
	I0526 21:25:13.919030  527485 logs.go:123] Gathering logs for etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] ...
	I0526 21:25:13.919056  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:13.939979  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:13.940392  527485 command_runner.go:124] ! 2021-05-26 21:23:30.145280 I | etcdmain: etcd Version: 3.4.13
	I0526 21:25:13.940467  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146007 I | etcdmain: Git SHA: ae9734ed2
	I0526 21:25:13.940997  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146359 I | etcdmain: Go Version: go1.12.17
	I0526 21:25:13.941588  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146935 I | etcdmain: Go OS/Arch: linux/amd64
	I0526 21:25:13.941957  527485 command_runner.go:124] ! 2021-05-26 21:23:30.147549 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
	I0526 21:25:13.942250  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:13.942621  527485 command_runner.go:124] ! 2021-05-26 21:23:30.148927 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:13.942715  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159191 I | embed: name = multinode-20210526212238-510955
	I0526 21:25:13.943139  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159781 I | embed: data dir = /var/lib/minikube/etcd
	I0526 21:25:13.943303  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161368 I | embed: member dir = /var/lib/minikube/etcd/member
	I0526 21:25:13.943549  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161781 I | embed: heartbeat = 100ms
	I0526 21:25:13.943727  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162024 I | embed: election = 1000ms
	I0526 21:25:13.944038  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162419 I | embed: snapshot count = 10000
	I0526 21:25:13.944440  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162834 I | embed: advertise client URLs = https://192.168.39.229:2379
	I0526 21:25:13.944675  527485 command_runner.go:124] ! 2021-05-26 21:23:30.186657 I | etcdserver: starting member b8647f2870156d71 in cluster 2bfbf13ce68722b
	I0526 21:25:13.944752  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=()
	I0526 21:25:13.945145  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 0
	I0526 21:25:13.945357  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: newRaft b8647f2870156d71 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	I0526 21:25:13.945550  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 1
	I0526 21:25:13.945736  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:13.946210  527485 command_runner.go:124] ! 2021-05-26 21:23:30.205555 W | auth: simple token is not cryptographically signed
	I0526 21:25:13.946249  527485 command_runner.go:124] ! 2021-05-26 21:23:30.234208 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	I0526 21:25:13.946385  527485 command_runner.go:124] ! 2021-05-26 21:23:30.243414 I | etcdserver: b8647f2870156d71 as single-node; fast-forwarding 9 ticks (election ticks 10)
	I0526 21:25:13.946493  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:13.946578  527485 command_runner.go:124] ! 2021-05-26 21:23:30.255082 I | etcdserver/membership: added member b8647f2870156d71 [https://192.168.39.229:2380] to cluster 2bfbf13ce68722b
	I0526 21:25:13.946860  527485 command_runner.go:124] ! 2021-05-26 21:23:30.261097 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:13.946923  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264526 I | embed: listening for peers on 192.168.39.229:2380
	I0526 21:25:13.947147  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264701 I | embed: listening for metrics on http://127.0.0.1:2381
	I0526 21:25:13.947519  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 is starting a new election at term 1
	I0526 21:25:13.947533  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became candidate at term 2
	I0526 21:25:13.947545  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 2
	I0526 21:25:13.947555  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became leader at term 2
	I0526 21:25:13.947573  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 2
	I0526 21:25:13.947585  527485 command_runner.go:124] ! 2021-05-26 21:23:30.893688 I | etcdserver: setting up the initial cluster version to 3.4
	I0526 21:25:13.947598  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897562 I | embed: ready to serve client requests
	I0526 21:25:13.947615  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897893 I | etcdserver: published {Name:multinode-20210526212238-510955 ClientURLs:[https://192.168.39.229:2379]} to cluster 2bfbf13ce68722b
	I0526 21:25:13.947629  527485 command_runner.go:124] ! 2021-05-26 21:23:30.898097 I | embed: ready to serve client requests
	I0526 21:25:13.947641  527485 command_runner.go:124] ! 2021-05-26 21:23:30.904911 I | embed: serving client requests on 127.0.0.1:2379
	I0526 21:25:13.947653  527485 command_runner.go:124] ! 2021-05-26 21:23:30.925406 I | embed: serving client requests on 192.168.39.229:2379
	I0526 21:25:13.947666  527485 command_runner.go:124] ! 2021-05-26 21:23:30.930764 N | etcdserver/membership: set the initial cluster version to 3.4
	I0526 21:25:13.947680  527485 command_runner.go:124] ! 2021-05-26 21:23:30.973015 I | etcdserver/api: enabled capabilities for version 3.4
	I0526 21:25:13.947701  527485 command_runner.go:124] ! 2021-05-26 21:23:35.005110 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (158.136927ms) to execute
	I0526 21:25:13.947727  527485 command_runner.go:124] ! 2021-05-26 21:23:35.008540 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (159.3133ms) to execute
	I0526 21:25:13.947748  527485 command_runner.go:124] ! 2021-05-26 21:23:35.012635 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.936302ms) to execute
	I0526 21:25:13.947773  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013064 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (148.811077ms) to execute
	I0526 21:25:13.947795  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013577 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (157.477156ms) to execute
	I0526 21:25:13.947811  527485 command_runner.go:124] ! 2021-05-26 21:23:48.034379 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947829  527485 command_runner.go:124] ! 2021-05-26 21:23:50.916831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947841  527485 command_runner.go:124] ! 2021-05-26 21:24:00.917857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947853  527485 command_runner.go:124] ! 2021-05-26 21:24:10.918220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947867  527485 command_runner.go:124] ! 2021-05-26 21:24:20.917896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947880  527485 command_runner.go:124] ! 2021-05-26 21:24:30.916918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947894  527485 command_runner.go:124] ! 2021-05-26 21:24:40.917190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947907  527485 command_runner.go:124] ! 2021-05-26 21:24:50.917237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947921  527485 command_runner.go:124] ! 2021-05-26 21:25:00.916673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:13.947934  527485 command_runner.go:124] ! 2021-05-26 21:25:10.921256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
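
The etcd entries above are tailed with crictl against the container ID that the earlier "found id:" lines resolved. A minimal sketch of doing the same lookup manually inside the VM, assuming crictl is pointed at containerd's default CRI socket (ETCD_ID is just a local shell variable used here for illustration):

	# Resolve the etcd container ID, then tail its logs (same calls as the Run line above):
	ETCD_ID=$(sudo crictl ps -a --name etcd -q | head -n1)
	sudo crictl logs --tail 400 "$ETCD_ID"
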
	I0526 21:25:13.951431  527485 logs.go:123] Gathering logs for storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] ...
	I0526 21:25:13.951452  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:13.971535  527485 command_runner.go:124] ! I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:25:13.971977  527485 command_runner.go:124] ! I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:25:13.972343  527485 command_runner.go:124] ! I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:25:13.972396  527485 command_runner.go:124] ! I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:25:13.972669  527485 command_runner.go:124] ! I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:13.972740  527485 command_runner.go:124] ! I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:25:13.973145  527485 command_runner.go:124] ! I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
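
The storage-provisioner output above can also be read through the API server instead of crictl; a minimal sketch, assuming the kube-system pod is still named storage-provisioner as in the pod table earlier in this block:

	kubectl --context multinode-20210526212238-510955 -n kube-system logs storage-provisioner --tail=400
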
	I0526 21:25:13.974698  527485 logs.go:123] Gathering logs for container status ...
	I0526 21:25:13.974714  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:25:14.003527  527485 command_runner.go:124] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	I0526 21:25:14.003548  527485 command_runner.go:124] > a9593dff4428d       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   1d96eb581f035
	I0526 21:25:14.003558  527485 command_runner.go:124] > 5d3df8c94eaed       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   722b1b257c571
	I0526 21:25:14.003570  527485 command_runner.go:124] > 69df1859ce4d1       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   53490c652b9e5
	I0526 21:25:14.003582  527485 command_runner.go:124] > de6efc6fec4b2       43154ddb57a83       About a minute ago   Running             kube-proxy                0                   038c42970362d
	I0526 21:25:14.003592  527485 command_runner.go:124] > c8538106e966b       0369cf4303ffd       About a minute ago   Running             etcd                      0                   2ad404c6a9c44
	I0526 21:25:14.003604  527485 command_runner.go:124] > e6bb9bee7539a       ed2c44fbdd78b       About a minute ago   Running             kube-scheduler            0                   24fd8b8599a6e
	I0526 21:25:14.003618  527485 command_runner.go:124] > 2314e41b1b443       a27166429d98e       About a minute ago   Running             kube-controller-manager   0                   73ada73fbbf0b
	I0526 21:25:14.003630  527485 command_runner.go:124] > a0581c0e5409b       a8c2fdb8bf76e       About a minute ago   Running             kube-apiserver            0                   fe43674906f20
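
The container-status table above comes from the fallback one-liner on the Run line just before it: prefer crictl (this cluster uses containerd), otherwise fall back to docker; the logged form additionally falls back when the crictl invocation itself fails. A minimal standalone sketch of the same idea:

	# List all containers, preferring crictl and falling back to docker:
	if command -v crictl >/dev/null 2>&1; then
	  sudo crictl ps -a
	else
	  sudo docker ps -a
	fi
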
	I0526 21:25:14.004139  527485 logs.go:123] Gathering logs for dmesg ...
	I0526 21:25:14.004150  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:25:14.014722  527485 command_runner.go:124] > [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0526 21:25:14.014739  527485 command_runner.go:124] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0526 21:25:14.014755  527485 command_runner.go:124] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0526 21:25:14.014769  527485 command_runner.go:124] > [  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0526 21:25:14.014781  527485 command_runner.go:124] > [  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	I0526 21:25:14.014792  527485 command_runner.go:124] >               If you want to keep using the local clock, then add:
	I0526 21:25:14.014797  527485 command_runner.go:124] >                 "trace_clock=local"
	I0526 21:25:14.014803  527485 command_runner.go:124] >               on the kernel command line
	I0526 21:25:14.014811  527485 command_runner.go:124] > [  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0526 21:25:14.014819  527485 command_runner.go:124] > [  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	I0526 21:25:14.014829  527485 command_runner.go:124] > [  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	I0526 21:25:14.014840  527485 command_runner.go:124] > [  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	I0526 21:25:14.014856  527485 command_runner.go:124] > [  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	I0526 21:25:14.014871  527485 command_runner.go:124] > [  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	I0526 21:25:14.014883  527485 command_runner.go:124] > [  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	I0526 21:25:14.014898  527485 command_runner.go:124] > [  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	I0526 21:25:14.014908  527485 command_runner.go:124] > [May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	I0526 21:25:14.014914  527485 command_runner.go:124] > [  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	I0526 21:25:14.014924  527485 command_runner.go:124] > [  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	I0526 21:25:14.014930  527485 command_runner.go:124] > [ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	I0526 21:25:14.014938  527485 command_runner.go:124] > [ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	I0526 21:25:14.014944  527485 command_runner.go:124] > [May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	I0526 21:25:14.014957  527485 command_runner.go:124] > [ +45.152218] NFSD: Unable to end grace period: -110
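
The dmesg excerpt above is already filtered to warning level and above; the flags are taken verbatim from the Run line that precedes it. A minimal sketch of running the same collection step by hand inside the guest:

	# Kernel messages, warnings and worse, no pager or colour, last 400 lines:
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
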
	I0526 21:25:14.015833  527485 logs.go:123] Gathering logs for kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] ...
	I0526 21:25:14.015845  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:14.048160  527485 command_runner.go:124] ! Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	I0526 21:25:14.048183  527485 command_runner.go:124] ! I0526 21:23:29.805604       1 server.go:632] external host was not specified, using 192.168.39.229
	I0526 21:25:14.048193  527485 command_runner.go:124] ! I0526 21:23:29.806982       1 server.go:182] Version: v1.20.2
	I0526 21:25:14.048205  527485 command_runner.go:124] ! I0526 21:23:30.593640       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0526 21:25:14.048237  527485 command_runner.go:124] ! I0526 21:23:30.598821       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:14.048267  527485 command_runner.go:124] ! I0526 21:23:30.598945       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:14.048310  527485 command_runner.go:124] ! I0526 21:23:30.600954       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:14.048341  527485 command_runner.go:124] ! I0526 21:23:30.601309       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:14.048356  527485 command_runner.go:124] ! I0526 21:23:30.616590       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048373  527485 command_runner.go:124] ! I0526 21:23:30.617065       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048386  527485 command_runner.go:124] ! I0526 21:23:30.995013       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048406  527485 command_runner.go:124] ! I0526 21:23:30.995139       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048420  527485 command_runner.go:124] ! I0526 21:23:31.030659       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:14.048436  527485 command_runner.go:124] ! I0526 21:23:31.031231       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.048450  527485 command_runner.go:124] ! I0526 21:23:31.031324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.048461  527485 command_runner.go:124] ! I0526 21:23:31.032369       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048478  527485 command_runner.go:124] ! I0526 21:23:31.032725       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048491  527485 command_runner.go:124] ! I0526 21:23:31.143094       1 instance.go:289] Using reconciler: lease
	I0526 21:25:14.048501  527485 command_runner.go:124] ! I0526 21:23:31.148814       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048516  527485 command_runner.go:124] ! I0526 21:23:31.148936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048528  527485 command_runner.go:124] ! I0526 21:23:31.164327       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048542  527485 command_runner.go:124] ! I0526 21:23:31.164627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048555  527485 command_runner.go:124] ! I0526 21:23:31.183831       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048570  527485 command_runner.go:124] ! I0526 21:23:31.184185       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048581  527485 command_runner.go:124] ! I0526 21:23:31.203621       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048598  527485 command_runner.go:124] ! I0526 21:23:31.204140       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048608  527485 command_runner.go:124] ! I0526 21:23:31.218608       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048626  527485 command_runner.go:124] ! I0526 21:23:31.218929       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048636  527485 command_runner.go:124] ! I0526 21:23:31.235670       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048651  527485 command_runner.go:124] ! I0526 21:23:31.235780       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048663  527485 command_runner.go:124] ! I0526 21:23:31.248767       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048682  527485 command_runner.go:124] ! I0526 21:23:31.248973       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048700  527485 command_runner.go:124] ! I0526 21:23:31.270717       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048717  527485 command_runner.go:124] ! I0526 21:23:31.272045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048727  527485 command_runner.go:124] ! I0526 21:23:31.287807       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048741  527485 command_runner.go:124] ! I0526 21:23:31.288158       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048755  527485 command_runner.go:124] ! I0526 21:23:31.302175       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048772  527485 command_runner.go:124] ! I0526 21:23:31.302294       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048783  527485 command_runner.go:124] ! I0526 21:23:31.318788       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048798  527485 command_runner.go:124] ! I0526 21:23:31.318898       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048810  527485 command_runner.go:124] ! I0526 21:23:31.340681       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048825  527485 command_runner.go:124] ! I0526 21:23:31.341103       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048838  527485 command_runner.go:124] ! I0526 21:23:31.364875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048852  527485 command_runner.go:124] ! I0526 21:23:31.365260       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048878  527485 command_runner.go:124] ! I0526 21:23:31.375229       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048896  527485 command_runner.go:124] ! I0526 21:23:31.375353       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048908  527485 command_runner.go:124] ! I0526 21:23:31.384385       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048922  527485 command_runner.go:124] ! I0526 21:23:31.384585       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048934  527485 command_runner.go:124] ! I0526 21:23:31.392770       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048950  527485 command_runner.go:124] ! I0526 21:23:31.392939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048961  527485 command_runner.go:124] ! I0526 21:23:31.406398       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.048976  527485 command_runner.go:124] ! I0526 21:23:31.406589       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.048986  527485 command_runner.go:124] ! I0526 21:23:31.421828       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049000  527485 command_runner.go:124] ! I0526 21:23:31.422392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049015  527485 command_runner.go:124] ! I0526 21:23:31.434772       1 rest.go:131] the default service ipfamily for this cluster is: IPv4
	I0526 21:25:14.049026  527485 command_runner.go:124] ! I0526 21:23:31.530123       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049042  527485 command_runner.go:124] ! I0526 21:23:31.530234       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049053  527485 command_runner.go:124] ! I0526 21:23:31.542917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049069  527485 command_runner.go:124] ! I0526 21:23:31.543258       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049078  527485 command_runner.go:124] ! I0526 21:23:31.558871       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049092  527485 command_runner.go:124] ! I0526 21:23:31.558975       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049105  527485 command_runner.go:124] ! I0526 21:23:31.578311       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049121  527485 command_runner.go:124] ! I0526 21:23:31.578428       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049132  527485 command_runner.go:124] ! I0526 21:23:31.579212       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049147  527485 command_runner.go:124] ! I0526 21:23:31.579406       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049158  527485 command_runner.go:124] ! I0526 21:23:31.593279       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049183  527485 command_runner.go:124] ! I0526 21:23:31.593392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049196  527485 command_runner.go:124] ! I0526 21:23:31.609260       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049211  527485 command_runner.go:124] ! I0526 21:23:31.609368       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049221  527485 command_runner.go:124] ! I0526 21:23:31.626851       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049236  527485 command_runner.go:124] ! I0526 21:23:31.626960       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049251  527485 command_runner.go:124] ! I0526 21:23:31.653023       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049286  527485 command_runner.go:124] ! I0526 21:23:31.653138       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049299  527485 command_runner.go:124] ! I0526 21:23:31.662951       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049315  527485 command_runner.go:124] ! I0526 21:23:31.663349       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049326  527485 command_runner.go:124] ! I0526 21:23:31.683106       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049340  527485 command_runner.go:124] ! I0526 21:23:31.684613       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049350  527485 command_runner.go:124] ! I0526 21:23:31.700741       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049364  527485 command_runner.go:124] ! I0526 21:23:31.701266       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049377  527485 command_runner.go:124] ! I0526 21:23:31.722045       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049392  527485 command_runner.go:124] ! I0526 21:23:31.722235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049405  527485 command_runner.go:124] ! I0526 21:23:31.736295       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049420  527485 command_runner.go:124] ! I0526 21:23:31.737071       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049431  527485 command_runner.go:124] ! I0526 21:23:31.751086       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049445  527485 command_runner.go:124] ! I0526 21:23:31.751202       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049455  527485 command_runner.go:124] ! I0526 21:23:31.767941       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049473  527485 command_runner.go:124] ! I0526 21:23:31.768045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049487  527485 command_runner.go:124] ! I0526 21:23:31.784917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049503  527485 command_runner.go:124] ! I0526 21:23:31.785029       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049513  527485 command_runner.go:124] ! I0526 21:23:31.802204       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049527  527485 command_runner.go:124] ! I0526 21:23:31.802314       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049540  527485 command_runner.go:124] ! I0526 21:23:31.817427       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049555  527485 command_runner.go:124] ! I0526 21:23:31.817616       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049567  527485 command_runner.go:124] ! I0526 21:23:31.837841       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049582  527485 command_runner.go:124] ! I0526 21:23:31.837939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049593  527485 command_runner.go:124] ! I0526 21:23:31.860217       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049608  527485 command_runner.go:124] ! I0526 21:23:31.861221       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049620  527485 command_runner.go:124] ! I0526 21:23:31.871254       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049635  527485 command_runner.go:124] ! I0526 21:23:31.872836       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049645  527485 command_runner.go:124] ! I0526 21:23:31.884052       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049662  527485 command_runner.go:124] ! I0526 21:23:31.884160       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049675  527485 command_runner.go:124] ! I0526 21:23:31.898818       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049690  527485 command_runner.go:124] ! I0526 21:23:31.898925       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049701  527485 command_runner.go:124] ! I0526 21:23:31.913046       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049715  527485 command_runner.go:124] ! I0526 21:23:31.913149       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049730  527485 command_runner.go:124] ! I0526 21:23:31.925884       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049745  527485 command_runner.go:124] ! I0526 21:23:31.925994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049758  527485 command_runner.go:124] ! I0526 21:23:31.939143       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049772  527485 command_runner.go:124] ! I0526 21:23:31.939253       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049782  527485 command_runner.go:124] ! I0526 21:23:31.954393       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049799  527485 command_runner.go:124] ! I0526 21:23:31.956005       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049812  527485 command_runner.go:124] ! I0526 21:23:31.964255       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049828  527485 command_runner.go:124] ! I0526 21:23:31.964369       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049839  527485 command_runner.go:124] ! I0526 21:23:31.980824       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049853  527485 command_runner.go:124] ! I0526 21:23:31.980931       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049866  527485 command_runner.go:124] ! I0526 21:23:31.998875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049881  527485 command_runner.go:124] ! I0526 21:23:31.998978       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049891  527485 command_runner.go:124] ! I0526 21:23:32.014057       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049905  527485 command_runner.go:124] ! I0526 21:23:32.014169       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049918  527485 command_runner.go:124] ! I0526 21:23:32.027301       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049934  527485 command_runner.go:124] ! I0526 21:23:32.027633       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049944  527485 command_runner.go:124] ! I0526 21:23:32.046160       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049959  527485 command_runner.go:124] ! I0526 21:23:32.046890       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049969  527485 command_runner.go:124] ! I0526 21:23:32.068538       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.049984  527485 command_runner.go:124] ! I0526 21:23:32.069814       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.049997  527485 command_runner.go:124] ! I0526 21:23:32.087119       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050013  527485 command_runner.go:124] ! I0526 21:23:32.087547       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050024  527485 command_runner.go:124] ! I0526 21:23:32.097832       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050039  527485 command_runner.go:124] ! I0526 21:23:32.097940       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050050  527485 command_runner.go:124] ! I0526 21:23:32.107249       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050067  527485 command_runner.go:124] ! I0526 21:23:32.107932       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050077  527485 command_runner.go:124] ! I0526 21:23:32.119796       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050092  527485 command_runner.go:124] ! I0526 21:23:32.119897       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050104  527485 command_runner.go:124] ! I0526 21:23:32.128209       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050120  527485 command_runner.go:124] ! I0526 21:23:32.128321       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050130  527485 command_runner.go:124] ! I0526 21:23:32.138008       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050145  527485 command_runner.go:124] ! I0526 21:23:32.138111       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050155  527485 command_runner.go:124] ! I0526 21:23:32.160727       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050170  527485 command_runner.go:124] ! I0526 21:23:32.160833       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050183  527485 command_runner.go:124] ! I0526 21:23:32.186843       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050197  527485 command_runner.go:124] ! I0526 21:23:32.186949       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050208  527485 command_runner.go:124] ! I0526 21:23:32.198121       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050222  527485 command_runner.go:124] ! I0526 21:23:32.198232       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050235  527485 command_runner.go:124] ! I0526 21:23:32.206015       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050252  527485 command_runner.go:124] ! I0526 21:23:32.206127       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050266  527485 command_runner.go:124] ! I0526 21:23:32.222761       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050286  527485 command_runner.go:124] ! I0526 21:23:32.223204       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050299  527485 command_runner.go:124] ! I0526 21:23:32.232528       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050313  527485 command_runner.go:124] ! I0526 21:23:32.232629       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050325  527485 command_runner.go:124] ! I0526 21:23:32.245897       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050341  527485 command_runner.go:124] ! I0526 21:23:32.246007       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050354  527485 command_runner.go:124] ! I0526 21:23:32.263847       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050422  527485 command_runner.go:124] ! I0526 21:23:32.263950       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050436  527485 command_runner.go:124] ! I0526 21:23:32.275996       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050451  527485 command_runner.go:124] ! I0526 21:23:32.276100       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050462  527485 command_runner.go:124] ! I0526 21:23:32.286992       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050479  527485 command_runner.go:124] ! I0526 21:23:32.288760       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050492  527485 command_runner.go:124] ! I0526 21:23:32.300558       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050508  527485 command_runner.go:124] ! I0526 21:23:32.300656       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050521  527485 command_runner.go:124] ! W0526 21:23:32.466350       1 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
	I0526 21:25:14.050535  527485 command_runner.go:124] ! W0526 21:23:32.475974       1 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050549  527485 command_runner.go:124] ! W0526 21:23:32.486620       1 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050565  527485 command_runner.go:124] ! W0526 21:23:32.495038       1 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050580  527485 command_runner.go:124] ! W0526 21:23:32.498634       1 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050594  527485 command_runner.go:124] ! W0526 21:23:32.503834       1 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050612  527485 command_runner.go:124] ! W0526 21:23:32.506839       1 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:14.050626  527485 command_runner.go:124] ! W0526 21:23:32.511920       1 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
	I0526 21:25:14.050639  527485 command_runner.go:124] ! W0526 21:23:32.512155       1 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
	I0526 21:25:14.050671  527485 command_runner.go:124] ! I0526 21:23:32.520325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:14.050702  527485 command_runner.go:124] ! I0526 21:23:32.520699       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:14.050717  527485 command_runner.go:124] ! I0526 21:23:32.522294       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050733  527485 command_runner.go:124] ! I0526 21:23:32.522675       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050747  527485 command_runner.go:124] ! I0526 21:23:32.531035       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:14.050762  527485 command_runner.go:124] ! I0526 21:23:32.531144       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:14.050778  527485 command_runner.go:124] ! I0526 21:23:34.690784       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:14.050792  527485 command_runner.go:124] ! I0526 21:23:34.691285       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.050809  527485 command_runner.go:124] ! I0526 21:23:34.692130       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
	I0526 21:25:14.050822  527485 command_runner.go:124] ! I0526 21:23:34.692740       1 secure_serving.go:197] Serving securely on [::]:8443
	I0526 21:25:14.050835  527485 command_runner.go:124] ! I0526 21:23:34.693343       1 apf_controller.go:261] Starting API Priority and Fairness config controller
	I0526 21:25:14.050847  527485 command_runner.go:124] ! I0526 21:23:34.693677       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:14.050864  527485 command_runner.go:124] ! I0526 21:23:34.694744       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0526 21:25:14.050881  527485 command_runner.go:124] ! I0526 21:23:34.694836       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0526 21:25:14.050899  527485 command_runner.go:124] ! I0526 21:23:34.694880       1 available_controller.go:475] Starting AvailableConditionController
	I0526 21:25:14.050916  527485 command_runner.go:124] ! I0526 21:23:34.694885       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0526 21:25:14.050928  527485 command_runner.go:124] ! I0526 21:23:34.694904       1 autoregister_controller.go:141] Starting autoregister controller
	I0526 21:25:14.050941  527485 command_runner.go:124] ! I0526 21:23:34.694908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0526 21:25:14.050953  527485 command_runner.go:124] ! I0526 21:23:34.696887       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0526 21:25:14.050967  527485 command_runner.go:124] ! I0526 21:23:34.697053       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0526 21:25:14.050986  527485 command_runner.go:124] ! I0526 21:23:34.697670       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	I0526 21:25:14.051000  527485 command_runner.go:124] ! I0526 21:23:34.697935       1 controller.go:83] Starting OpenAPI AggregationController
	I0526 21:25:14.051013  527485 command_runner.go:124] ! I0526 21:23:34.698627       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0526 21:25:14.051026  527485 command_runner.go:124] ! I0526 21:23:34.705120       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.051042  527485 command_runner.go:124] ! I0526 21:23:34.705289       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:14.051057  527485 command_runner.go:124] ! I0526 21:23:34.706119       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0526 21:25:14.051068  527485 command_runner.go:124] ! I0526 21:23:34.706246       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0526 21:25:14.051089  527485 command_runner.go:124] ! E0526 21:23:34.733148       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.229, ResourceVersion: 0, AdditionalErrorMsg: 
	I0526 21:25:14.051103  527485 command_runner.go:124] ! I0526 21:23:34.762565       1 controller.go:86] Starting OpenAPI controller
	I0526 21:25:14.051118  527485 command_runner.go:124] ! I0526 21:23:34.762983       1 naming_controller.go:291] Starting NamingConditionController
	I0526 21:25:14.051130  527485 command_runner.go:124] ! I0526 21:23:34.763230       1 establishing_controller.go:76] Starting EstablishingController
	I0526 21:25:14.051143  527485 command_runner.go:124] ! I0526 21:23:34.763815       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0526 21:25:14.051157  527485 command_runner.go:124] ! I0526 21:23:34.764676       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0526 21:25:14.051167  527485 command_runner.go:124] ! I0526 21:23:34.765003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0526 21:25:14.051181  527485 command_runner.go:124] ! I0526 21:23:34.894833       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0526 21:25:14.051193  527485 command_runner.go:124] ! I0526 21:23:34.895159       1 cache.go:39] Caches are synced for autoregister controller
	I0526 21:25:14.051206  527485 command_runner.go:124] ! I0526 21:23:34.895543       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0526 21:25:14.051221  527485 command_runner.go:124] ! I0526 21:23:34.895893       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0526 21:25:14.051234  527485 command_runner.go:124] ! I0526 21:23:34.897085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0526 21:25:14.051246  527485 command_runner.go:124] ! I0526 21:23:34.899871       1 apf_controller.go:266] Running API Priority and Fairness config worker
	I0526 21:25:14.051258  527485 command_runner.go:124] ! I0526 21:23:34.907242       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0526 21:25:14.051272  527485 command_runner.go:124] ! I0526 21:23:35.022751       1 controller.go:609] quota admission added evaluator for: namespaces
	I0526 21:25:14.051292  527485 command_runner.go:124] ! I0526 21:23:35.690855       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0526 21:25:14.051314  527485 command_runner.go:124] ! I0526 21:23:35.691097       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0526 21:25:14.051332  527485 command_runner.go:124] ! I0526 21:23:35.708402       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0526 21:25:14.051347  527485 command_runner.go:124] ! I0526 21:23:35.726885       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0526 21:25:14.051362  527485 command_runner.go:124] ! I0526 21:23:35.727088       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0526 21:25:14.051379  527485 command_runner.go:124] ! I0526 21:23:36.334571       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0526 21:25:14.051396  527485 command_runner.go:124] ! I0526 21:23:36.389004       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0526 21:25:14.051410  527485 command_runner.go:124] ! W0526 21:23:36.485873       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.229]
	I0526 21:25:14.051426  527485 command_runner.go:124] ! I0526 21:23:36.487435       1 controller.go:609] quota admission added evaluator for: endpoints
	I0526 21:25:14.051441  527485 command_runner.go:124] ! I0526 21:23:36.499209       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0526 21:25:14.051454  527485 command_runner.go:124] ! I0526 21:23:37.294654       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0526 21:25:14.051469  527485 command_runner.go:124] ! I0526 21:23:38.382157       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0526 21:25:14.051484  527485 command_runner.go:124] ! I0526 21:23:38.454712       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0526 21:25:14.051499  527485 command_runner.go:124] ! I0526 21:23:43.955877       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0526 21:25:14.051512  527485 command_runner.go:124] ! I0526 21:23:53.285833       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0526 21:25:14.051524  527485 command_runner.go:124] ! I0526 21:23:53.338274       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0526 21:25:14.051539  527485 command_runner.go:124] ! I0526 21:24:01.973387       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:14.051555  527485 command_runner.go:124] ! I0526 21:24:01.973608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.051569  527485 command_runner.go:124] ! I0526 21:24:01.973627       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.051579  527485 command_runner.go:124] ! I0526 21:24:43.497572       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:14.051595  527485 command_runner.go:124] ! I0526 21:24:43.497775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.051608  527485 command_runner.go:124] ! I0526 21:24:43.498072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.061823  527485 logs.go:123] Gathering logs for coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] ...
	I0526 21:25:14.061838  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:14.082593  527485 command_runner.go:124] > .:53
	I0526 21:25:14.082607  527485 command_runner.go:124] > [INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	I0526 21:25:14.082611  527485 command_runner.go:124] > CoreDNS-1.7.0
	I0526 21:25:14.082617  527485 command_runner.go:124] > linux/amd64, go1.14.4, f59c03d
	I0526 21:25:14.082860  527485 logs.go:123] Gathering logs for kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] ...
	I0526 21:25:14.082878  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:14.120114  527485 command_runner.go:124] ! I0526 21:23:31.228401       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:14.121435  527485 command_runner.go:124] ! W0526 21:23:34.792981       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0526 21:25:14.121461  527485 command_runner.go:124] ! W0526 21:23:34.795544       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:14.121479  527485 command_runner.go:124] ! W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0526 21:25:14.121502  527485 command_runner.go:124] ! W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:25:14.121530  527485 command_runner.go:124] ! I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:14.121583  527485 command_runner.go:124] ! I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:14.121599  527485 command_runner.go:124] ! I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:25:14.121611  527485 command_runner.go:124] ! I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:14.121647  527485 command_runner.go:124] ! E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:14.121686  527485 command_runner.go:124] ! E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:14.121715  527485 command_runner.go:124] ! E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:14.121745  527485 command_runner.go:124] ! E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0526 21:25:14.121778  527485 command_runner.go:124] ! E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0526 21:25:14.121806  527485 command_runner.go:124] ! E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:14.121855  527485 command_runner.go:124] ! E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:14.121887  527485 command_runner.go:124] ! E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:14.121920  527485 command_runner.go:124] ! E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:14.121947  527485 command_runner.go:124] ! E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0526 21:25:14.121978  527485 command_runner.go:124] ! E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:14.122004  527485 command_runner.go:124] ! E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0526 21:25:14.122032  527485 command_runner.go:124] ! E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:14.122059  527485 command_runner.go:124] ! E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:14.122089  527485 command_runner.go:124] ! E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:14.122120  527485 command_runner.go:124] ! E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:14.122153  527485 command_runner.go:124] ! E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:14.122183  527485 command_runner.go:124] ! E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:14.122202  527485 command_runner.go:124] ! I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0526 21:25:14.125846  527485 logs.go:123] Gathering logs for kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] ...
	I0526 21:25:14.125864  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:14.145591  527485 command_runner.go:124] ! I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:25:14.145674  527485 command_runner.go:124] ! I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	I0526 21:25:14.145982  527485 command_runner.go:124] ! W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:25:14.146070  527485 command_runner.go:124] ! I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:25:14.146618  527485 command_runner.go:124] ! I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:25:14.147379  527485 command_runner.go:124] ! I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:25:14.147445  527485 command_runner.go:124] ! I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:25:14.147710  527485 command_runner.go:124] ! I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:25:14.148076  527485 command_runner.go:124] ! I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:25:14.148153  527485 command_runner.go:124] ! I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:25:14.148364  527485 command_runner.go:124] ! I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:25:14.148436  527485 command_runner.go:124] ! I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:25:14.149102  527485 command_runner.go:124] ! I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
	I0526 21:25:14.150528  527485 logs.go:123] Gathering logs for kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] ...
	I0526 21:25:14.150542  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:14.176605  527485 command_runner.go:124] ! Flag --port has been deprecated, see --secure-port instead.
	I0526 21:25:14.176621  527485 command_runner.go:124] ! I0526 21:23:30.770698       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:14.176628  527485 command_runner.go:124] ! I0526 21:23:31.105740       1 controllermanager.go:176] Version: v1.20.2
	I0526 21:25:14.176641  527485 command_runner.go:124] ! I0526 21:23:31.110528       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:14.176653  527485 command_runner.go:124] ! I0526 21:23:31.110685       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.176665  527485 command_runner.go:124] ! I0526 21:23:31.111406       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	I0526 21:25:14.176681  527485 command_runner.go:124] ! I0526 21:23:31.111685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:14.176689  527485 command_runner.go:124] ! I0526 21:23:37.283320       1 shared_informer.go:240] Waiting for caches to sync for tokens
	I0526 21:25:14.176697  527485 command_runner.go:124] ! I0526 21:23:37.384858       1 shared_informer.go:247] Caches are synced for tokens 
	I0526 21:25:14.176704  527485 command_runner.go:124] ! I0526 21:23:37.398260       1 controllermanager.go:554] Started "csrcleaner"
	I0526 21:25:14.176712  527485 command_runner.go:124] ! I0526 21:23:37.398681       1 cleaner.go:82] Starting CSR cleaner controller
	I0526 21:25:14.176719  527485 command_runner.go:124] ! I0526 21:23:37.436326       1 controllermanager.go:554] Started "tokencleaner"
	I0526 21:25:14.176727  527485 command_runner.go:124] ! I0526 21:23:37.436948       1 tokencleaner.go:118] Starting token cleaner controller
	I0526 21:25:14.176735  527485 command_runner.go:124] ! I0526 21:23:37.437051       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0526 21:25:14.176749  527485 command_runner.go:124] ! I0526 21:23:37.437060       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0526 21:25:14.176768  527485 command_runner.go:124] ! E0526 21:23:37.458692       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	I0526 21:25:14.176782  527485 command_runner.go:124] ! W0526 21:23:37.458788       1 controllermanager.go:546] Skipping "service"
	I0526 21:25:14.176795  527485 command_runner.go:124] ! I0526 21:23:37.485897       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	I0526 21:25:14.176807  527485 command_runner.go:124] ! W0526 21:23:37.486148       1 controllermanager.go:546] Skipping "ephemeral-volume"
	I0526 21:25:14.176825  527485 command_runner.go:124] ! I0526 21:23:37.486971       1 publisher.go:98] Starting root CA certificate configmap publisher
	I0526 21:25:14.176840  527485 command_runner.go:124] ! I0526 21:23:37.487325       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0526 21:25:14.176854  527485 command_runner.go:124] ! I0526 21:23:37.514186       1 controllermanager.go:554] Started "endpointslicemirroring"
	I0526 21:25:14.176885  527485 command_runner.go:124] ! I0526 21:23:37.515190       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0526 21:25:14.176898  527485 command_runner.go:124] ! I0526 21:23:37.515570       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0526 21:25:14.176905  527485 command_runner.go:124] ! I0526 21:23:37.550580       1 controllermanager.go:554] Started "replicaset"
	I0526 21:25:14.176913  527485 command_runner.go:124] ! I0526 21:23:37.551218       1 replica_set.go:182] Starting replicaset controller
	I0526 21:25:14.176925  527485 command_runner.go:124] ! I0526 21:23:37.551414       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0526 21:25:14.176937  527485 command_runner.go:124] ! I0526 21:23:37.987267       1 controllermanager.go:554] Started "horizontalpodautoscaling"
	I0526 21:25:14.176950  527485 command_runner.go:124] ! I0526 21:23:37.988181       1 horizontal.go:169] Starting HPA controller
	I0526 21:25:14.176961  527485 command_runner.go:124] ! I0526 21:23:37.988418       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0526 21:25:14.176976  527485 command_runner.go:124] ! I0526 21:23:38.238507       1 controllermanager.go:554] Started "persistentvolume-binder"
	I0526 21:25:14.176990  527485 command_runner.go:124] ! I0526 21:23:38.238941       1 pv_controller_base.go:307] Starting persistent volume controller
	I0526 21:25:14.177004  527485 command_runner.go:124] ! I0526 21:23:38.238953       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0526 21:25:14.177013  527485 command_runner.go:124] ! I0526 21:23:38.636899       1 controllermanager.go:554] Started "garbagecollector"
	I0526 21:25:14.177021  527485 command_runner.go:124] ! I0526 21:23:38.636902       1 garbagecollector.go:142] Starting garbage collector controller
	I0526 21:25:14.177034  527485 command_runner.go:124] ! I0526 21:23:38.636960       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:14.177047  527485 command_runner.go:124] ! I0526 21:23:38.637525       1 graph_builder.go:289] GraphBuilder running
	I0526 21:25:14.177057  527485 command_runner.go:124] ! I0526 21:23:39.037283       1 controllermanager.go:554] Started "disruption"
	I0526 21:25:14.177071  527485 command_runner.go:124] ! I0526 21:23:39.037574       1 disruption.go:331] Starting disruption controller
	I0526 21:25:14.177083  527485 command_runner.go:124] ! I0526 21:23:39.037585       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0526 21:25:14.177097  527485 command_runner.go:124] ! I0526 21:23:39.286540       1 controllermanager.go:554] Started "clusterrole-aggregation"
	I0526 21:25:14.177111  527485 command_runner.go:124] ! I0526 21:23:39.286598       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
	I0526 21:25:14.177122  527485 command_runner.go:124] ! I0526 21:23:39.286605       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0526 21:25:14.177132  527485 command_runner.go:124] ! I0526 21:23:39.537304       1 controllermanager.go:554] Started "pvc-protection"
	I0526 21:25:14.177151  527485 command_runner.go:124] ! I0526 21:23:39.537579       1 pvc_protection_controller.go:110] Starting PVC protection controller
	I0526 21:25:14.177169  527485 command_runner.go:124] ! I0526 21:23:39.537670       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0526 21:25:14.177185  527485 command_runner.go:124] ! I0526 21:23:39.786982       1 controllermanager.go:554] Started "pv-protection"
	I0526 21:25:14.177200  527485 command_runner.go:124] ! I0526 21:23:39.787110       1 pv_protection_controller.go:83] Starting PV protection controller
	I0526 21:25:14.177212  527485 command_runner.go:124] ! I0526 21:23:39.787118       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0526 21:25:14.177222  527485 command_runner.go:124] ! I0526 21:23:40.036383       1 controllermanager.go:554] Started "endpoint"
	I0526 21:25:14.177235  527485 command_runner.go:124] ! I0526 21:23:40.036415       1 endpoints_controller.go:184] Starting endpoint controller
	I0526 21:25:14.177250  527485 command_runner.go:124] ! I0526 21:23:40.037058       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0526 21:25:14.177263  527485 command_runner.go:124] ! I0526 21:23:40.288607       1 controllermanager.go:554] Started "podgc"
	I0526 21:25:14.177276  527485 command_runner.go:124] ! I0526 21:23:40.288827       1 gc_controller.go:89] Starting GC controller
	I0526 21:25:14.177289  527485 command_runner.go:124] ! I0526 21:23:40.289411       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0526 21:25:14.177307  527485 command_runner.go:124] ! W0526 21:23:40.988861       1 shared_informer.go:494] resyncPeriod 13h30m7.5724073s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:14.177323  527485 command_runner.go:124] ! I0526 21:23:40.989960       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
	I0526 21:25:14.177394  527485 command_runner.go:124] ! I0526 21:23:40.990215       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
	I0526 21:25:14.177421  527485 command_runner.go:124] ! I0526 21:23:40.990426       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
	I0526 21:25:14.177437  527485 command_runner.go:124] ! I0526 21:23:40.990971       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0526 21:25:14.177455  527485 command_runner.go:124] ! I0526 21:23:40.991569       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0526 21:25:14.177473  527485 command_runner.go:124] ! I0526 21:23:40.991963       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0526 21:25:14.177489  527485 command_runner.go:124] ! I0526 21:23:40.992141       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
	I0526 21:25:14.177506  527485 command_runner.go:124] ! I0526 21:23:40.992301       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0526 21:25:14.177526  527485 command_runner.go:124] ! I0526 21:23:40.992532       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
	I0526 21:25:14.177544  527485 command_runner.go:124] ! W0526 21:23:40.992690       1 shared_informer.go:494] resyncPeriod 13h37m25.694603534s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:14.177565  527485 command_runner.go:124] ! I0526 21:23:40.993075       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0526 21:25:14.177585  527485 command_runner.go:124] ! I0526 21:23:40.993243       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0526 21:25:14.177601  527485 command_runner.go:124] ! I0526 21:23:40.993580       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
	I0526 21:25:14.177618  527485 command_runner.go:124] ! I0526 21:23:40.993747       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0526 21:25:14.177633  527485 command_runner.go:124] ! I0526 21:23:40.993780       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
	I0526 21:25:14.177649  527485 command_runner.go:124] ! I0526 21:23:40.993805       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0526 21:25:14.177666  527485 command_runner.go:124] ! I0526 21:23:40.993841       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
	I0526 21:25:14.177684  527485 command_runner.go:124] ! I0526 21:23:40.993861       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
	I0526 21:25:14.177704  527485 command_runner.go:124] ! I0526 21:23:40.993876       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
	I0526 21:25:14.177720  527485 command_runner.go:124] ! I0526 21:23:40.993891       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
	I0526 21:25:14.177733  527485 command_runner.go:124] ! I0526 21:23:40.993951       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0526 21:25:14.177746  527485 command_runner.go:124] ! I0526 21:23:40.993980       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0526 21:25:14.177764  527485 command_runner.go:124] ! I0526 21:23:40.994082       1 controllermanager.go:554] Started "resourcequota"
	I0526 21:25:14.177792  527485 command_runner.go:124] ! I0526 21:23:40.994178       1 resource_quota_controller.go:273] Starting resource quota controller
	I0526 21:25:14.177807  527485 command_runner.go:124] ! I0526 21:23:40.994191       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:14.177820  527485 command_runner.go:124] ! I0526 21:23:40.994219       1 resource_quota_monitor.go:304] QuotaMonitor running
	I0526 21:25:14.177829  527485 command_runner.go:124] ! I0526 21:23:41.028175       1 controllermanager.go:554] Started "namespace"
	I0526 21:25:14.177838  527485 command_runner.go:124] ! I0526 21:23:41.028716       1 namespace_controller.go:200] Starting namespace controller
	I0526 21:25:14.177853  527485 command_runner.go:124] ! I0526 21:23:41.028992       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0526 21:25:14.177863  527485 command_runner.go:124] ! I0526 21:23:41.051981       1 controllermanager.go:554] Started "ttl"
	I0526 21:25:14.177874  527485 command_runner.go:124] ! I0526 21:23:41.052926       1 ttl_controller.go:121] Starting TTL controller
	I0526 21:25:14.177886  527485 command_runner.go:124] ! I0526 21:23:41.053383       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0526 21:25:14.177899  527485 command_runner.go:124] ! I0526 21:23:41.289145       1 controllermanager.go:554] Started "attachdetach"
	I0526 21:25:14.177911  527485 command_runner.go:124] ! W0526 21:23:41.289246       1 controllermanager.go:546] Skipping "ttl-after-finished"
	I0526 21:25:14.177921  527485 command_runner.go:124] ! I0526 21:23:41.289282       1 attach_detach_controller.go:328] Starting attach detach controller
	I0526 21:25:14.177933  527485 command_runner.go:124] ! I0526 21:23:41.289291       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0526 21:25:14.177946  527485 command_runner.go:124] ! I0526 21:23:41.537362       1 controllermanager.go:554] Started "serviceaccount"
	I0526 21:25:14.177958  527485 command_runner.go:124] ! I0526 21:23:41.537403       1 serviceaccounts_controller.go:117] Starting service account controller
	I0526 21:25:14.177973  527485 command_runner.go:124] ! I0526 21:23:41.538137       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0526 21:25:14.177985  527485 command_runner.go:124] ! I0526 21:23:41.787243       1 controllermanager.go:554] Started "deployment"
	I0526 21:25:14.177997  527485 command_runner.go:124] ! I0526 21:23:41.788023       1 deployment_controller.go:153] Starting deployment controller
	I0526 21:25:14.178011  527485 command_runner.go:124] ! I0526 21:23:41.790417       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0526 21:25:14.178022  527485 command_runner.go:124] ! I0526 21:23:41.936235       1 controllermanager.go:554] Started "csrapproving"
	I0526 21:25:14.178036  527485 command_runner.go:124] ! I0526 21:23:41.936293       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0526 21:25:14.178051  527485 command_runner.go:124] ! I0526 21:23:41.936301       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0526 21:25:14.178070  527485 command_runner.go:124] ! I0526 21:23:42.137381       1 request.go:655] Throttling request took 1.048213324s, request: GET:https://192.168.39.229:8443/apis/extensions/v1beta1?timeout=32s
	I0526 21:25:14.178084  527485 command_runner.go:124] ! I0526 21:23:42.189224       1 node_ipam_controller.go:91] Sending events to api server.
	I0526 21:25:14.178099  527485 command_runner.go:124] ! I0526 21:23:52.210125       1 range_allocator.go:82] Sending events to api server.
	I0526 21:25:14.178115  527485 command_runner.go:124] ! I0526 21:23:52.211056       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
	I0526 21:25:14.178125  527485 command_runner.go:124] ! I0526 21:23:52.211333       1 controllermanager.go:554] Started "nodeipam"
	I0526 21:25:14.178141  527485 command_runner.go:124] ! W0526 21:23:52.211708       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	I0526 21:25:14.178154  527485 command_runner.go:124] ! W0526 21:23:52.212021       1 controllermanager.go:546] Skipping "route"
	I0526 21:25:14.178167  527485 command_runner.go:124] ! I0526 21:23:52.212292       1 node_ipam_controller.go:159] Starting ipam controller
	I0526 21:25:14.178182  527485 command_runner.go:124] ! I0526 21:23:52.212876       1 shared_informer.go:240] Waiting for caches to sync for node
	I0526 21:25:14.178196  527485 command_runner.go:124] ! I0526 21:23:52.227871       1 node_lifecycle_controller.go:77] Sending events to api server
	I0526 21:25:14.178212  527485 command_runner.go:124] ! E0526 21:23:52.227991       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
	I0526 21:25:14.178222  527485 command_runner.go:124] ! W0526 21:23:52.228003       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
	I0526 21:25:14.178235  527485 command_runner.go:124] ! I0526 21:23:52.257128       1 controllermanager.go:554] Started "persistentvolume-expander"
	I0526 21:25:14.178249  527485 command_runner.go:124] ! I0526 21:23:52.257967       1 expand_controller.go:310] Starting expand controller
	I0526 21:25:14.178263  527485 command_runner.go:124] ! I0526 21:23:52.258344       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0526 21:25:14.178277  527485 command_runner.go:124] ! I0526 21:23:52.287731       1 controllermanager.go:554] Started "endpointslice"
	I0526 21:25:14.178293  527485 command_runner.go:124] ! I0526 21:23:52.287941       1 endpointslice_controller.go:237] Starting endpoint slice controller
	I0526 21:25:14.178308  527485 command_runner.go:124] ! I0526 21:23:52.287950       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0526 21:25:14.178318  527485 command_runner.go:124] ! I0526 21:23:52.334629       1 controllermanager.go:554] Started "daemonset"
	I0526 21:25:14.178329  527485 command_runner.go:124] ! I0526 21:23:52.334789       1 daemon_controller.go:285] Starting daemon sets controller
	I0526 21:25:14.178344  527485 command_runner.go:124] ! I0526 21:23:52.334797       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0526 21:25:14.178361  527485 command_runner.go:124] ! I0526 21:23:52.366633       1 controllermanager.go:554] Started "statefulset"
	I0526 21:25:14.178375  527485 command_runner.go:124] ! I0526 21:23:52.366920       1 stateful_set.go:146] Starting stateful set controller
	I0526 21:25:14.178389  527485 command_runner.go:124] ! I0526 21:23:52.367009       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0526 21:25:14.178402  527485 command_runner.go:124] ! I0526 21:23:52.395670       1 controllermanager.go:554] Started "cronjob"
	I0526 21:25:14.178413  527485 command_runner.go:124] ! I0526 21:23:52.395842       1 cronjob_controller.go:96] Starting CronJob Manager
	I0526 21:25:14.178425  527485 command_runner.go:124] ! I0526 21:23:52.416080       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0526 21:25:14.178441  527485 command_runner.go:124] ! I0526 21:23:52.416256       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0526 21:25:14.178460  527485 command_runner.go:124] ! I0526 21:23:52.416385       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178476  527485 command_runner.go:124] ! I0526 21:23:52.416862       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0526 21:25:14.178491  527485 command_runner.go:124] ! I0526 21:23:52.416958       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0526 21:25:14.178504  527485 command_runner.go:124] ! I0526 21:23:52.416975       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178520  527485 command_runner.go:124] ! I0526 21:23:52.417715       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0526 21:25:14.178537  527485 command_runner.go:124] ! I0526 21:23:52.417882       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0526 21:25:14.178556  527485 command_runner.go:124] ! I0526 21:23:52.418025       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178570  527485 command_runner.go:124] ! I0526 21:23:52.418373       1 controllermanager.go:554] Started "csrsigning"
	I0526 21:25:14.178585  527485 command_runner.go:124] ! I0526 21:23:52.418419       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0526 21:25:14.178601  527485 command_runner.go:124] ! I0526 21:23:52.418799       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:14.178615  527485 command_runner.go:124] ! I0526 21:23:52.418805       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0526 21:25:14.178628  527485 command_runner.go:124] ! I0526 21:23:52.515732       1 controllermanager.go:554] Started "bootstrapsigner"
	I0526 21:25:14.178643  527485 command_runner.go:124] ! I0526 21:23:52.516431       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0526 21:25:14.178658  527485 command_runner.go:124] ! I0526 21:23:52.765741       1 controllermanager.go:554] Started "replicationcontroller"
	I0526 21:25:14.178672  527485 command_runner.go:124] ! I0526 21:23:52.765769       1 replica_set.go:182] Starting replicationcontroller controller
	I0526 21:25:14.178686  527485 command_runner.go:124] ! I0526 21:23:52.765867       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0526 21:25:14.178696  527485 command_runner.go:124] ! I0526 21:23:52.915756       1 node_lifecycle_controller.go:380] Sending events to api server.
	I0526 21:25:14.178708  527485 command_runner.go:124] ! I0526 21:23:52.916150       1 taint_manager.go:163] Sending events to api server.
	I0526 21:25:14.178721  527485 command_runner.go:124] ! I0526 21:23:52.916342       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
	I0526 21:25:14.178731  527485 command_runner.go:124] ! I0526 21:23:52.916386       1 controllermanager.go:554] Started "nodelifecycle"
	I0526 21:25:14.178745  527485 command_runner.go:124] ! I0526 21:23:52.916749       1 node_lifecycle_controller.go:542] Starting node controller
	I0526 21:25:14.178758  527485 command_runner.go:124] ! I0526 21:23:52.916921       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0526 21:25:14.178769  527485 command_runner.go:124] ! I0526 21:23:53.165965       1 controllermanager.go:554] Started "job"
	I0526 21:25:14.178785  527485 command_runner.go:124] ! I0526 21:23:53.166025       1 job_controller.go:148] Starting job controller
	I0526 21:25:14.178793  527485 command_runner.go:124] ! I0526 21:23:53.167211       1 shared_informer.go:240] Waiting for caches to sync for job
	I0526 21:25:14.178804  527485 command_runner.go:124] ! I0526 21:23:53.170385       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:14.178829  527485 command_runner.go:124] ! W0526 21:23:53.178965       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955" does not exist
	I0526 21:25:14.178845  527485 command_runner.go:124] ! I0526 21:23:53.213010       1 shared_informer.go:247] Caches are synced for node 
	I0526 21:25:14.178861  527485 command_runner.go:124] ! I0526 21:23:53.213735       1 range_allocator.go:172] Starting range CIDR allocator
	I0526 21:25:14.178876  527485 command_runner.go:124] ! I0526 21:23:53.214071       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0526 21:25:14.178887  527485 command_runner.go:124] ! I0526 21:23:53.214233       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0526 21:25:14.178899  527485 command_runner.go:124] ! I0526 21:23:53.215982       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0526 21:25:14.178915  527485 command_runner.go:124] ! I0526 21:23:53.216587       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0526 21:25:14.178930  527485 command_runner.go:124] ! I0526 21:23:53.217085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0526 21:25:14.178958  527485 command_runner.go:124] ! I0526 21:23:53.217522       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0526 21:25:14.178974  527485 command_runner.go:124] ! I0526 21:23:53.218215       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0526 21:25:14.178985  527485 command_runner.go:124] ! I0526 21:23:53.218891       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0526 21:25:14.178998  527485 command_runner.go:124] ! I0526 21:23:53.229560       1 shared_informer.go:247] Caches are synced for namespace 
	I0526 21:25:14.179013  527485 command_runner.go:124] ! I0526 21:23:53.235029       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0526 21:25:14.179025  527485 command_runner.go:124] ! I0526 21:23:53.238654       1 shared_informer.go:247] Caches are synced for service account 
	I0526 21:25:14.179039  527485 command_runner.go:124] ! I0526 21:23:53.240824       1 shared_informer.go:247] Caches are synced for endpoint 
	I0526 21:25:14.179054  527485 command_runner.go:124] ! I0526 21:23:53.247379       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0526 21:25:14.179068  527485 command_runner.go:124] ! I0526 21:23:53.251558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0526 21:25:14.179079  527485 command_runner.go:124] ! I0526 21:23:53.252699       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0526 21:25:14.179088  527485 command_runner.go:124] ! I0526 21:23:53.256544       1 shared_informer.go:247] Caches are synced for TTL 
	I0526 21:25:14.179104  527485 command_runner.go:124] ! I0526 21:23:53.265652       1 range_allocator.go:373] Set node multinode-20210526212238-510955 PodCIDR to [10.244.0.0/24]
	I0526 21:25:14.179117  527485 command_runner.go:124] ! I0526 21:23:53.268627       1 shared_informer.go:247] Caches are synced for job 
	I0526 21:25:14.179128  527485 command_runner.go:124] ! I0526 21:23:53.268752       1 shared_informer.go:247] Caches are synced for stateful set 
	I0526 21:25:14.179144  527485 command_runner.go:124] ! I0526 21:23:53.290037       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0526 21:25:14.179159  527485 command_runner.go:124] ! I0526 21:23:53.290226       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0526 21:25:14.179173  527485 command_runner.go:124] ! I0526 21:23:53.292847       1 shared_informer.go:247] Caches are synced for deployment 
	I0526 21:25:14.179184  527485 command_runner.go:124] ! I0526 21:23:53.293728       1 shared_informer.go:247] Caches are synced for GC 
	I0526 21:25:14.179196  527485 command_runner.go:124] ! I0526 21:23:53.293879       1 shared_informer.go:247] Caches are synced for HPA 
	I0526 21:25:14.179211  527485 command_runner.go:124] ! I0526 21:23:53.293974       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0526 21:25:14.179225  527485 command_runner.go:124] ! I0526 21:23:53.317816       1 shared_informer.go:247] Caches are synced for taint 
	I0526 21:25:14.179241  527485 command_runner.go:124] ! I0526 21:23:53.317927       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I0526 21:25:14.179259  527485 command_runner.go:124] ! W0526 21:23:53.318278       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955. Assuming now as a timestamp.
	I0526 21:25:14.179273  527485 command_runner.go:124] ! I0526 21:23:53.318396       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0526 21:25:14.179286  527485 command_runner.go:124] ! I0526 21:23:53.318775       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0526 21:25:14.179314  527485 command_runner.go:124] ! I0526 21:23:53.319750       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955 event: Registered Node multinode-20210526212238-510955 in Controller"
	I0526 21:25:14.179329  527485 command_runner.go:124] ! I0526 21:23:53.337883       1 shared_informer.go:247] Caches are synced for disruption 
	I0526 21:25:14.179342  527485 command_runner.go:124] ! I0526 21:23:53.337896       1 disruption.go:339] Sending events to api server.
	I0526 21:25:14.179356  527485 command_runner.go:124] ! I0526 21:23:53.368948       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0526 21:25:14.179372  527485 command_runner.go:124] ! I0526 21:23:53.431193       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2wgbs"
	I0526 21:25:14.179399  527485 command_runner.go:124] ! I0526 21:23:53.431223       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0526 21:25:14.179414  527485 command_runner.go:124] ! I0526 21:23:53.459736       1 shared_informer.go:247] Caches are synced for expand 
	I0526 21:25:14.179427  527485 command_runner.go:124] ! I0526 21:23:53.479631       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:14.179441  527485 command_runner.go:124] ! I0526 21:23:53.487838       1 shared_informer.go:247] Caches are synced for PV protection 
	I0526 21:25:14.179455  527485 command_runner.go:124] ! I0526 21:23:53.489356       1 shared_informer.go:247] Caches are synced for attach detach 
	I0526 21:25:14.179471  527485 command_runner.go:124] ! I0526 21:23:53.494672       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:14.179483  527485 command_runner.go:124] ! I0526 21:23:53.539359       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0526 21:25:14.179501  527485 command_runner.go:124] ! I0526 21:23:53.545401       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbl42"
	I0526 21:25:14.179525  527485 command_runner.go:124] ! I0526 21:23:53.545422       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:14.179548  527485 command_runner.go:124] ! I0526 21:23:53.556102       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-multinode-20210526212238-510955" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:25:14.179571  527485 command_runner.go:124] ! I0526 21:23:53.567036       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tw67b"
	I0526 21:25:14.179590  527485 command_runner.go:124] ! E0526 21:23:53.635384       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0526 21:25:14.179607  527485 command_runner.go:124] ! I0526 21:23:53.689947       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:14.179630  527485 command_runner.go:124] ! I0526 21:23:53.733785       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0526 21:25:14.179653  527485 command_runner.go:124] ! I0526 21:23:53.758013       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:14.179668  527485 command_runner.go:124] ! I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:14.179683  527485 command_runner.go:124] ! I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:14.179695  527485 command_runner.go:124] ! I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:25:14.179714  527485 command_runner.go:124] ! I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0526 21:25:14.189022  527485 logs.go:123] Gathering logs for containerd ...
	I0526 21:25:14.189041  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:25:14.204280  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:14 UTC. --
	I0526 21:25:14.204307  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:14.204320  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:14.204341  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.412639957Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:14.204368  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.454795725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:14.204394  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.455022736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204432  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456819758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.204457  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456940685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204492  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457199432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.204526  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457299817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204552  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457342626Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:14.204575  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457353348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204599  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457375564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204624  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457518971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.204658  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457752665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.204683  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457768067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:14.204709  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457801760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:14.204731  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457811694Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:14.204760  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461742670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:14.204788  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461851430Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:14.204816  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462036878Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204840  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462069131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204885  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462082171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204912  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462094524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204930  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462115116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204947  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462127721Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204963  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462139766Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204981  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462157542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.204997  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462167923Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:14.205015  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462295610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:14.205033  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462357720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:14.205049  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462745295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205064  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462770123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:14.205079  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462815565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205095  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462827921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205109  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462846347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205126  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462857513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205141  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462870788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205158  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462881154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205191  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462892049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205207  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462903002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205222  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462913917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:14.205239  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462958764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205255  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462972025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205275  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462983386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205290  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462994704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205308  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463133131Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `systemd_cgroup` only works for runtime io.containerd.runtime.v1.linux"
	I0526 21:25:14.205323  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463145276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.205337  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463363744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:14.205351  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463401676Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:14.205364  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463415404Z" level=info msg="containerd successfully booted in 0.052163s"
	I0526 21:25:14.205375  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Stopping containerd container runtime...
	I0526 21:25:14.205385  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: containerd.service: Succeeded.
	I0526 21:25:14.205394  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Stopped containerd container runtime.
	I0526 21:25:14.205403  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:14.205412  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:14.205425  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.677351233Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:14.205440  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703735354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:14.205455  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703939180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205478  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706070962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.205496  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706222939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205519  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706683734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.205537  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706837938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205554  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706963959Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:14.205571  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707081760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205586  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707216688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205602  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707381113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:14.205624  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707841019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:14.205641  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707973506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:14.205656  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708095816Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:14.205670  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708236930Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:14.205685  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708536776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:14.205703  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708698510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:14.205727  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708937323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205751  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709074999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205774  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709196994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205800  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709315424Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205822  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709506686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205841  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709629192Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205875  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709743913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205900  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709857985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.205924  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709979410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:14.205946  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710125076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:14.205970  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710271949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:14.205994  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710830775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:14.206016  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710974791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:14.206032  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711117145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206047  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711243334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206065  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711363735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206080  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711549081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206094  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711666234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206109  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711781506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206124  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711895813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206139  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712013139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206153  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712131897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:14.206168  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712269473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206184  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712503525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206200  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712659007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206217  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712779064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206236  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712986218Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	I0526 21:25:14.206335  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713141331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00037b050 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.2 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	I0526 21:25:14.206351  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713322225Z" level=info msg="Connect containerd service"
	I0526 21:25:14.206370  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713538361Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	I0526 21:25:14.206403  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714213931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:14.206429  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714359921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:14.206452  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714868242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:14.206474  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715023618Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:14.206497  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715142631Z" level=info msg="containerd successfully booted in 0.038760s"
	I0526 21:25:14.206519  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726087774Z" level=info msg="Start subscribing containerd event"
	I0526 21:25:14.206538  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726733995Z" level=info msg="Start recovering state"
	I0526 21:25:14.206559  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781395051Z" level=info msg="Start event monitor"
	I0526 21:25:14.206578  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781771001Z" level=info msg="Start snapshots syncer"
	I0526 21:25:14.206598  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781893491Z" level=info msg="Start cni network conf syncer"
	I0526 21:25:14.206619  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.782003464Z" level=info msg="Start streaming server"
	I0526 21:25:14.206647  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.484581294Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206668  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.490843770Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206688  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.501056680Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206714  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.508591647Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.206742  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.580716340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486 pid=2407
	I0526 21:25:14.206772  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.598809833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb pid=2435
	I0526 21:25:14.206801  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602060491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5 pid=2434
	I0526 21:25:14.206831  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602007310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e pid=2452
	I0526 21:25:14.206862  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.066808539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\""
	I0526 21:25:14.206885  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.074803022Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	I0526 21:25:14.206908  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.194718464Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:14.206924  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.196219933Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:14.206949  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.262678371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\""
	I0526 21:25:14.206969  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.272571919Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	I0526 21:25:14.206996  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.347228547Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:14.207018  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.349365690Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:14.207044  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.419043703Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\" returns successfully"
	I0526 21:25:14.207077  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.520520792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\""
	I0526 21:25:14.207107  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.527415671Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	I0526 21:25:14.207140  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.566421321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\""
	I0526 21:25:14.207168  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.575850717Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	I0526 21:25:14.207194  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.621335319Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:14.207211  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.623169879Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:14.207228  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.681255114Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\" returns successfully"
	I0526 21:25:14.207250  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.683704929Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:14.207272  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.684577023Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:14.207290  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.017920282Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\" returns successfully"
	I0526 21:25:14.207307  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.056525418Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\" returns successfully"
	I0526 21:25:14.207364  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.290788536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	I0526 21:25:14.207386  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.802102062Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207410  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.839975209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8 pid=2987
	I0526 21:25:14.207437  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.915628984Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207469  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.950847165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a pid=3013
	I0526 21:25:14.207500  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.116312794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\""
	I0526 21:25:14.207529  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.127305490Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	I0526 21:25:14.207562  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.182202148Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:14.207586  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.188910123Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:14.207608  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.381612238Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\" returns successfully"
	I0526 21:25:14.207629  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.674364903Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	I0526 21:25:14.207651  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683119285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:14.207674  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683711665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:14.207692  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582858367Z" level=error msg="get state for 53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8" error="context deadline exceeded: unknown"
	I0526 21:25:14.207711  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582967226Z" level=warning msg="unknown status" status=0
	I0526 21:25:14.207744  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.969753374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\""
	I0526 21:25:14.207773  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.975070195Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	I0526 21:25:14.207807  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.027887855Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:14.207833  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.029566085Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:14.207858  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.574608517Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\" returns successfully"
	I0526 21:25:14.207886  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.297649575Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207912  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.323344186Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:14.207941  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.332120092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55 pid=3313
	I0526 21:25:14.207969  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.442356819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900 pid=3376
	I0526 21:25:14.207994  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.792546853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\""
	I0526 21:25:14.208013  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.796339883Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	I0526 21:25:14.208035  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.843281999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\""
	I0526 21:25:14.208056  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.849108598Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	I0526 21:25:14.208079  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.875948742Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:14.208094  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.879073015Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:14.208116  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.915826719Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:14.208131  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.918179651Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:14.208149  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.083539707Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\" returns successfully"
	I0526 21:25:14.208166  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.120722012Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\" returns successfully"
	I0526 21:25:14.226621  527485 logs.go:123] Gathering logs for kubelet ...
	I0526 21:25:14.226646  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:25:14.238565  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:14 UTC. --
	I0526 21:25:14.238594  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:14.238624  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.238677  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.238691  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365155    2343 server.go:416] Version: v1.20.2
	I0526 21:25:14.238717  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365664    2343 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:14.238744  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.382328    2343 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.238775  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:22.383887    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.238812  527485 command_runner.go:124] > May 26 21:23:24 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:24.586559    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.238838  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.392858    2343 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:14.238861  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.393993    2343 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:14.238911  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.394298    2343 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:14.238945  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395126    2343 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:14.238958  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395334    2343 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:14.238972  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395348    2343 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:14.238984  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395816    2343 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:14.239001  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395929    2343 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.239020  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396315    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.239033  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396571    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.239045  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397666    2343 remote_image.go:50] parsed scheme: ""
	I0526 21:25:14.239058  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397691    2343 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.239075  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397829    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.239089  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397957    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.239103  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.400786    2343 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:14.239115  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.401761    2343 kubelet.go:273] Watching apiserver
	I0526 21:25:14.239137  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.419726    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239159  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.433343    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239174  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.434846    2343 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:14.239199  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.435179    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239215  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.695431    2343 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:14.239231  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:14.239247  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:27.696850    2343 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0526 21:25:14.239264  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.698714    2343 server.go:1176] Started kubelet
	I0526 21:25:14.239277  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.699681    2343 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:14.239290  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.701131    2343 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:14.239391  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.701698    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.",
Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:14.239422  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.703923    2343 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:14.239432  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.707734    2343 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:14.239447  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.708096    2343 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:14.239471  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.708889    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239494  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.709701    2343 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239512  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.711040    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.239524  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711583    2343 client.go:86] parsed scheme: "unix"
	I0526 21:25:14.239538  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711779    2343 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:14.239554  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712280    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.239569  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712673    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.239583  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782226    2343 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:14.239595  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782318    2343 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:14.239606  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782638    2343 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:14.239687  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.799125    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.",
Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:14.239705  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.809183    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.239737  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810505    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239757  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810636    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.239777  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876097    2343 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:14.239796  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876127    2343 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:14.239817  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876145    2343 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:14.239843  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.876191    2343 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:14.239878  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.877853    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239914  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910604    2343 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.239931  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910787    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.239947  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.976408    2343 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
	I0526 21:25:14.239962  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.987845    2343 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:14.239997  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.000709    2343 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:14.240011  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.001042    2343 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:14.240028  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.004395    2343 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240041  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.010900    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240055  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.011906    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.240076  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.012281    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240091  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.111839    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240105  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.177382    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240119  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.180087    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240133  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.181373    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240146  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.182941    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.240174  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185069    2343 status_manager.go:550] Failed to get status for pod "kube-controller-manager-multinode-20210526212238-510955_kube-system(474c55dfb64741cc485e46b6bb9f2dc0)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240200  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185417    2343 status_manager.go:550] Failed to get status for pod "kube-scheduler-multinode-20210526212238-510955_kube-system(6b4a0ee8b3d15a1c2e47c15d32e6eb0d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240226  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.201047    2343 status_manager.go:550] Failed to get status for pod "kube-apiserver-multinode-20210526212238-510955_kube-system(b42b6879229f245abab6047de8662a2f)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240250  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.202364    2343 status_manager.go:550] Failed to get status for pod "etcd-multinode-20210526212238-510955_kube-system(34530b4d5ce1b17919f3b8976b2d0456)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240270  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.212215    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240294  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309602    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.240320  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309839    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.240344  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310062    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.240370  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310275    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240393  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310572    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240417  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310900    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240441  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311066    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.240466  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311200    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240491  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311326    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.240514  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.311324    2343 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240538  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311643    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:14.240561  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311955    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.240574  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.312763    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240599  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.318006    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240624  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.361617    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240637  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.412938    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240651  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.414299    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.240671  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.420140    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240684  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.513925    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240698  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.614235    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240732  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.620010    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240753  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.714407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240788  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.717664    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240809  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.815037    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240835  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.819848    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240849  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.915364    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240877  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.015843    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240902  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.112804    2343 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240916  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.116234    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240929  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.217167    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240953  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.219890    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.240967  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:29.223096    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.240981  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.317528    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.240994  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.418231    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241014  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.419707    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:14.241027  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.520018    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241040  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.620736    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241053  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.721115    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241071  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.821411    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241093  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.921772    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241113  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.022147    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241133  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.122970    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241153  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.223407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241178  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.323609    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241196  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.424033    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241213  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.524613    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241227  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.625186    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241240  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.725563    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241255  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.826076    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241272  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.932677    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241287  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:31.021296    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.241343  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.033185    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241364  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.133540    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241383  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.234158    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241404  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.334934    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241425  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.435265    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241445  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.535646    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241462  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.636091    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241478  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.736769    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241490  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.837337    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241502  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.937851    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241524  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.038171    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241537  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.138719    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241548  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.239058    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241559  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.339598    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241572  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.440290    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241584  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.540624    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241596  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.641006    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241608  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.741403    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241619  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.841966    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241631  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.942585    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241647  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.002095    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.241661  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.042747    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241681  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.142869    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241695  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.243254    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241714  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.343706    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241735  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.444105    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241756  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.545421    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241777  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.645867    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241797  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.746343    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241816  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.846868    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241836  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.947104    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241856  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.047842    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241875  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.148334    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241898  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.248550    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241918  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.349232    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241938  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.449632    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241957  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.549987    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241977  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.650314    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.241995  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.751182    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.242009  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:34.832693    2343 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:14.242025  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.841269    2343 nodelease.go:49] failed to get node "multinode-20210526212238-510955" when trying to set owner ref to the node lease: nodes "multinode-20210526212238-510955" not found
	I0526 21:25:14.242040  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.851652    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.242061  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.952325    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:14.242084  527485 command_runner.go:124] > May 26 21:23:35 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:35.015600    2343 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:14.242111  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:38.003372    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.242134  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:38.252332    2343 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.242155  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0526 21:25:14.242171  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: kubelet.service: Succeeded.
	I0526 21:25:14.242187  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0526 21:25:14.242204  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:14.242233  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.242270  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:14.242289  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.567074    2767 server.go:416] Version: v1.20.2
	I0526 21:25:14.242311  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.569090    2767 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:14.242334  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.580189    2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0526 21:25:14.242356  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.581836    2767 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:14.242377  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.594567    2767 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:14.242398  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596007    2767 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:14.242461  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596173    2767 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:14.242494  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596418    2767 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:14.242515  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596689    2767 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:14.242535  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596801    2767 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:14.242555  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597107    2767 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:14.242576  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597233    2767 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.242600  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597387    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.242619  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597579    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.242642  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597846    2767 remote_image.go:50] parsed scheme: ""
	I0526 21:25:14.242662  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597965    2767 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:14.242685  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598781    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.242705  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598958    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.242726  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599605    2767 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:14.242744  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599963    2767 kubelet.go:273] Watching apiserver
	I0526 21:25:14.242769  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.629159    2767 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:14.242793  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.914429    2767 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:14.242812  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:14.242832  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.918059    2767 server.go:1176] Started kubelet
	I0526 21:25:14.242851  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.928363    2767 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:14.242871  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.931699    2767 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:14.242891  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.943931    2767 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:14.242912  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.945256    2767 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:14.242938  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.949736    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.242958  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.949953    2767 client.go:86] parsed scheme: "unix"
	I0526 21:25:14.242978  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950079    2767 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:14.243003  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950244    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:14.243024  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950360    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:14.243046  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.960536    2767 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:14.243068  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.047200    2767 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:14.243089  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063101    2767 kubelet_node_status.go:109] Node multinode-20210526212238-510955 was previously registered
	I0526 21:25:14.243110  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063585    2767 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:14.243130  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100820    2767 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:14.243154  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100987    2767 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:14.243173  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.101019    2767 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:14.243198  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:44.101062    2767 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:14.243214  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167420    2767 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:14.243225  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167823    2767 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:14.243244  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167963    2767 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:14.243268  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168200    2767 state_mem.go:88] [cpumanager] updated default cpuset: ""
	I0526 21:25:14.243288  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168317    2767 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	I0526 21:25:14.243307  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168438    2767 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:14.243326  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: W0526 21:23:44.170589    2767 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:14.243346  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.170973    2767 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:14.243366  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201167    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243386  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201423    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243406  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202839    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243425  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202968    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243459  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349811    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243495  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349855    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243533  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349894    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.243567  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349913    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.243609  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243645  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243681  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:14.243714  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:14.243746  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349988    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:14.243777  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350013    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.243814  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350027    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:14.243834  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:14.243861  527485 command_runner.go:124] > May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.243883  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	I0526 21:25:14.243903  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	I0526 21:25:14.243930  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.243950  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.243984  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244021  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244056  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244089  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:14.244111  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.244144  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244177  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244210  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244242  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:14.244274  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:14.244307  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	I0526 21:25:14.244328  527485 command_runner.go:124] > May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.244349  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:14.244381  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:14.244417  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:14.244451  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:14.244485  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:16.777174  527485 api_server.go:223] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0526 21:25:16.786618  527485 api_server.go:249] https://192.168.39.229:8443/healthz returned 200:
	ok
	I0526 21:25:16.786691  527485 round_trippers.go:422] GET https://192.168.39.229:8443/version?timeout=32s
	I0526 21:25:16.786700  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:16.786705  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:16.786709  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:16.787858  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:25:16.787878  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:16.787883  527485 round_trippers.go:454]     Content-Length: 263
	I0526 21:25:16.787888  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:16 GMT
	I0526 21:25:16.787893  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:16.787897  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:16.787906  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:16.787913  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:16.787945  527485 request.go:1107] Response Body: {
	  "major": "1",
	  "minor": "20",
	  "gitVersion": "v1.20.2",
	  "gitCommit": "faecb196815e248d3ecfb03c680a4507229c2a56",
	  "gitTreeState": "clean",
	  "buildDate": "2021-01-13T13:20:00Z",
	  "goVersion": "go1.15.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0526 21:25:16.788058  527485 api_server.go:139] control plane version: v1.20.2
	I0526 21:25:16.788076  527485 api_server.go:129] duration metric: took 3.203268839s to wait for apiserver health ...
	I0526 21:25:16.788086  527485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0526 21:25:16.788110  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:25:16.788165  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:25:16.807484  527485 command_runner.go:124] > a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c
	I0526 21:25:16.808417  527485 cri.go:76] found id: "a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:16.808439  527485 cri.go:76] found id: ""
	I0526 21:25:16.808445  527485 logs.go:270] 1 containers: [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c]
	I0526 21:25:16.808484  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.812497  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.812957  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:25:16.813019  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:25:16.842444  527485 command_runner.go:124] > c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad
	I0526 21:25:16.842473  527485 cri.go:76] found id: "c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:16.842480  527485 cri.go:76] found id: ""
	I0526 21:25:16.842485  527485 logs.go:270] 1 containers: [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad]
	I0526 21:25:16.842519  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.849048  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.849082  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:25:16.849117  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:25:16.872050  527485 command_runner.go:124] > a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a
	I0526 21:25:16.872680  527485 cri.go:76] found id: "a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:16.872698  527485 cri.go:76] found id: ""
	I0526 21:25:16.872705  527485 logs.go:270] 1 containers: [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a]
	I0526 21:25:16.872751  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.876656  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.876834  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:25:16.876895  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:25:16.892544  527485 command_runner.go:124] > e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08
	I0526 21:25:16.893901  527485 cri.go:76] found id: "e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:16.893914  527485 cri.go:76] found id: ""
	I0526 21:25:16.893919  527485 logs.go:270] 1 containers: [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08]
	I0526 21:25:16.893946  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.897825  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.898071  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:25:16.898107  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:25:16.919930  527485 command_runner.go:124] > de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2
	I0526 21:25:16.922816  527485 cri.go:76] found id: "de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:16.922830  527485 cri.go:76] found id: ""
	I0526 21:25:16.922834  527485 logs.go:270] 1 containers: [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2]
	I0526 21:25:16.922862  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.927749  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.927776  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:25:16.927805  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:25:16.946206  527485 cri.go:76] found id: ""
	I0526 21:25:16.946219  527485 logs.go:270] 0 containers: []
	W0526 21:25:16.946223  527485 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:25:16.946228  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:25:16.946261  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:25:16.966964  527485 command_runner.go:124] > 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d
	I0526 21:25:16.967146  527485 cri.go:76] found id: "5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:16.967159  527485 cri.go:76] found id: ""
	I0526 21:25:16.967163  527485 logs.go:270] 1 containers: [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d]
	I0526 21:25:16.967188  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.970708  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.970734  527485 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:25:16.970763  527485 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:25:16.988710  527485 command_runner.go:124] > 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18
	I0526 21:25:16.989620  527485 cri.go:76] found id: "2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:16.989633  527485 cri.go:76] found id: ""
	I0526 21:25:16.989637  527485 logs.go:270] 1 containers: [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18]
	I0526 21:25:16.989660  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:25:16.994092  527485 command_runner.go:124] > /bin/crictl
	I0526 21:25:16.994233  527485 logs.go:123] Gathering logs for kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] ...
	I0526 21:25:16.994245  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2"
	I0526 21:25:17.011911  527485 command_runner.go:124] ! I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:25:17.011987  527485 command_runner.go:124] ! I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	I0526 21:25:17.012013  527485 command_runner.go:124] ! W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:25:17.012024  527485 command_runner.go:124] ! I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:25:17.012032  527485 command_runner.go:124] ! I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:25:17.012049  527485 command_runner.go:124] ! I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:25:17.012065  527485 command_runner.go:124] ! I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:25:17.012078  527485 command_runner.go:124] ! I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:25:17.012093  527485 command_runner.go:124] ! I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:25:17.012106  527485 command_runner.go:124] ! I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:25:17.012122  527485 command_runner.go:124] ! I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:25:17.012137  527485 command_runner.go:124] ! I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:25:17.012151  527485 command_runner.go:124] ! I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
	I0526 21:25:17.013510  527485 logs.go:123] Gathering logs for storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] ...
	I0526 21:25:17.013527  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d"
	I0526 21:25:17.033906  527485 command_runner.go:124] ! I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:25:17.034005  527485 command_runner.go:124] ! I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:25:17.034391  527485 command_runner.go:124] ! I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:25:17.034643  527485 command_runner.go:124] ! I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:25:17.034740  527485 command_runner.go:124] ! I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:17.035233  527485 command_runner.go:124] ! I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:25:17.035495  527485 command_runner.go:124] ! I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:25:17.037922  527485 logs.go:123] Gathering logs for kubelet ...
	I0526 21:25:17.037940  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:25:17.051460  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:17 UTC. --
	I0526 21:25:17.051482  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:17.051506  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.051552  527485 command_runner.go:124] > May 26 21:23:21 multinode-20210526212238-510955 kubelet[2343]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.051570  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365155    2343 server.go:416] Version: v1.20.2
	I0526 21:25:17.051596  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.365664    2343 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:17.051619  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:22.382328    2343 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.051656  527485 command_runner.go:124] > May 26 21:23:22 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:22.383887    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.051689  527485 command_runner.go:124] > May 26 21:23:24 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:24.586559    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.051718  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.392858    2343 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:17.051742  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.393993    2343 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:17.051815  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.394298    2343 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:17.051842  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395126    2343 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:17.052098  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395334    2343 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:17.052118  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395348    2343 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:17.052128  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395816    2343 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:17.052143  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.395929    2343 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.052165  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396315    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.052184  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.396571    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.052195  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397666    2343 remote_image.go:50] parsed scheme: ""
	I0526 21:25:17.052209  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397691    2343 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.052224  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397829    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.052239  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.397957    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.052259  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.400786    2343 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:17.052278  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.401761    2343 kubelet.go:273] Watching apiserver
	I0526 21:25:17.052311  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.419726    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052346  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.433343    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052370  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.434846    2343 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:17.052408  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.435179    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052430  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.695431    2343 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:17.052443  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:17.052459  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:27.696850    2343 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0526 21:25:17.052480  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.698714    2343 server.go:1176] Started kubelet
	I0526 21:25:17.052500  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.699681    2343 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:17.052521  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.701131    2343 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:17.052640  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.701698    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.",
Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:17.052660  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.703923    2343 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:17.052677  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.707734    2343 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:17.052699  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.708096    2343 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:17.052736  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.708889    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052779  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.709701    2343 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.052809  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.711040    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.052824  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711583    2343 client.go:86] parsed scheme: "unix"
	I0526 21:25:17.052846  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.711779    2343 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:17.052887  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712280    2343 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.052907  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.712673    2343 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.052936  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782226    2343 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:17.052957  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782318    2343 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:17.052979  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.782638    2343 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:17.053062  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.799125    2343 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-20210526212238-510955.1682bacd86c17a5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20210526212238-510955", UID:"multinode-20210526212238-510955", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.",
Source:v1.EventSource{Component:"kubelet", Host:"multinode-20210526212238-510955"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc023ccf3e9a5245a, ext:5868438524, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.229:8443: connect: connection refused'(may retry after sleeping)
	I0526 21:25:17.053078  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.809183    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.053097  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810505    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053116  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.810636    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053130  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876097    2343 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:17.053143  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876127    2343 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:17.053155  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.876145    2343 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:17.053173  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.876191    2343 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:17.053197  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.877853    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053223  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910604    2343 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053237  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.910787    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053253  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:27.976408    2343 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
	I0526 21:25:17.053267  527485 command_runner.go:124] > May 26 21:23:27 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:27.987845    2343 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:17.053298  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.000709    2343 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:17.053312  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.001042    2343 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:17.053331  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.004395    2343 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053347  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.010900    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053362  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.011906    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.053382  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.012281    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053395  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.111839    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053409  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.177382    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053422  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.180087    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053434  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.181373    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053448  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.182941    2343 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.053474  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185069    2343 status_manager.go:550] Failed to get status for pod "kube-controller-manager-multinode-20210526212238-510955_kube-system(474c55dfb64741cc485e46b6bb9f2dc0)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053499  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.185417    2343 status_manager.go:550] Failed to get status for pod "kube-scheduler-multinode-20210526212238-510955_kube-system(6b4a0ee8b3d15a1c2e47c15d32e6eb0d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053524  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.201047    2343 status_manager.go:550] Failed to get status for pod "kube-apiserver-multinode-20210526212238-510955_kube-system(b42b6879229f245abab6047de8662a2f)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053549  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: W0526 21:23:28.202364    2343 status_manager.go:550] Failed to get status for pod "etcd-multinode-20210526212238-510955_kube-system(34530b4d5ce1b17919f3b8976b2d0456)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053566  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.212215    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053588  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309602    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.053610  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.309839    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.053636  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310062    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.053659  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310275    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053687  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310572    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053710  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.310900    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053732  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311066    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.053755  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311200    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053783  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311326    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.053809  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.311324    2343 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053833  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311643    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:17.053855  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.311955    2343 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.053869  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.312763    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053894  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.318006    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dmultinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053919  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.361617    2343 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-20210526212238-510955&limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053932  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.412938    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053946  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:28.414299    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.053967  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.420140    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.053982  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.513925    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.053995  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.614235    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054019  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.620010    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054032  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.714407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054056  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.717664    2343 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054069  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.815037    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054092  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.819848    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054105  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:28.915364    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054119  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.015843    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054143  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.112804    2343 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210526212238-510955?timeout=10s": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054157  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.116234    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054170  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.217167    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054195  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.219890    2343 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054209  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:29.223096    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.054221  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.317528    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054235  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.418231    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054255  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.419707    2343 kubelet_node_status.go:93] Unable to register node "multinode-20210526212238-510955" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": dial tcp 192.168.39.229:8443: connect: connection refused
	I0526 21:25:17.054272  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.520018    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054287  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.620736    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054300  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.721115    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054312  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.821411    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054325  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:29.921772    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054338  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.022147    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054352  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.122970    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054365  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.223407    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054378  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.323609    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054391  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.424033    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054403  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.524613    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054416  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.625186    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054429  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.725563    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054445  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.826076    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054458  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:30.932677    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054472  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:31.021296    2343 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.054525  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.033185    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054539  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.133540    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054554  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.234158    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054567  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.334934    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054581  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.435265    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054592  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.535646    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054605  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.636091    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054618  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.736769    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054632  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.837337    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054644  527485 command_runner.go:124] > May 26 21:23:31 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:31.937851    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054658  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.038171    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054670  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.138719    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054683  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.239058    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054696  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.339598    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054711  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.440290    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054724  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.540624    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054737  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.641006    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054750  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.741403    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054767  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.841966    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054781  527485 command_runner.go:124] > May 26 21:23:32 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:32.942585    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054797  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.002095    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.054810  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.042747    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054825  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.142869    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054839  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.243254    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054852  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.343706    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054867  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.444105    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054880  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.545421    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054893  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.645867    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054906  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.746343    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054919  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.846868    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054932  527485 command_runner.go:124] > May 26 21:23:33 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:33.947104    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054946  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.047842    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054959  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.148334    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054971  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.248550    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054984  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.349232    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.054997  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.449632    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055009  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.549987    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055024  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.650314    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055038  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.751182    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055051  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:34.832693    2343 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:17.055068  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.841269    2343 nodelease.go:49] failed to get node "multinode-20210526212238-510955" when trying to set owner ref to the node lease: nodes "multinode-20210526212238-510955" not found
	I0526 21:25:17.055082  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.851652    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055098  527485 command_runner.go:124] > May 26 21:23:34 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:34.952325    2343 kubelet.go:2243] node "multinode-20210526212238-510955" not found
	I0526 21:25:17.055112  527485 command_runner.go:124] > May 26 21:23:35 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:35.015600    2343 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:17.055129  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: E0526 21:23:38.003372    2343 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.055146  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2343]: I0526 21:23:38.252332    2343 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.055160  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0526 21:25:17.055169  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: kubelet.service: Succeeded.
	I0526 21:25:17.055180  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0526 21:25:17.055191  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0526 21:25:17.055210  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.055230  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	I0526 21:25:17.055242  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.567074    2767 server.go:416] Version: v1.20.2
	I0526 21:25:17.055257  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.569090    2767 server.go:837] Client rotation is on, will bootstrap in background
	I0526 21:25:17.055274  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.580189    2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0526 21:25:17.055289  527485 command_runner.go:124] > May 26 21:23:38 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:38.581836    2767 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.055303  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.594567    2767 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	I0526 21:25:17.055318  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596007    2767 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
	I0526 21:25:17.055360  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596173    2767 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	I0526 21:25:17.055375  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596418    2767 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
	I0526 21:25:17.055391  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596689    2767 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
	I0526 21:25:17.055405  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.596801    2767 container_manager_linux.go:315] Creating device plugin manager: true
	I0526 21:25:17.055419  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597107    2767 remote_runtime.go:62] parsed scheme: ""
	I0526 21:25:17.055431  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597233    2767 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.055447  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597387    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.055459  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597579    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.055473  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597846    2767 remote_image.go:50] parsed scheme: ""
	I0526 21:25:17.055487  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.597965    2767 remote_image.go:50] scheme "" not registered, fallback to default scheme
	I0526 21:25:17.055504  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598781    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.055518  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.598958    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.055529  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599605    2767 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
	I0526 21:25:17.055541  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.599963    2767 kubelet.go:273] Watching apiserver
	I0526 21:25:17.055555  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.629159    2767 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4, apiVersion: v1alpha2
	I0526 21:25:17.055572  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.914429    2767 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	I0526 21:25:17.055586  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	I0526 21:25:17.055598  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.918059    2767 server.go:1176] Started kubelet
	I0526 21:25:17.055610  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.928363    2767 server.go:148] Starting to listen on 0.0.0.0:10250
	I0526 21:25:17.055620  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.931699    2767 server.go:410] Adding debug handlers to kubelet server.
	I0526 21:25:17.055633  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.943931    2767 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	I0526 21:25:17.055645  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.945256    2767 volume_manager.go:271] Starting Kubelet Volume Manager
	I0526 21:25:17.055663  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:43.949736    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.055675  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.949953    2767 client.go:86] parsed scheme: "unix"
	I0526 21:25:17.055688  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950079    2767 client.go:86] scheme "unix" not registered, fallback to default scheme
	I0526 21:25:17.055704  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950244    2767 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.055718  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.950360    2767 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.055730  527485 command_runner.go:124] > May 26 21:23:43 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:43.960536    2767 desired_state_of_world_populator.go:142] Desired state populator starts to run
	I0526 21:25:17.055744  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.047200    2767 kubelet_node_status.go:71] Attempting to register node multinode-20210526212238-510955
	I0526 21:25:17.055758  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063101    2767 kubelet_node_status.go:109] Node multinode-20210526212238-510955 was previously registered
	I0526 21:25:17.055776  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.063585    2767 kubelet_node_status.go:74] Successfully registered node multinode-20210526212238-510955
	I0526 21:25:17.055790  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100820    2767 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
	I0526 21:25:17.055804  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.100987    2767 status_manager.go:158] Starting to sync pod status with apiserver
	I0526 21:25:17.055818  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.101019    2767 kubelet.go:1802] Starting kubelet main sync loop.
	I0526 21:25:17.055838  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:44.101062    2767 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	I0526 21:25:17.055852  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167420    2767 cpu_manager.go:193] [cpumanager] starting with none policy
	I0526 21:25:17.055865  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167823    2767 cpu_manager.go:194] [cpumanager] reconciling every 10s
	I0526 21:25:17.055878  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.167963    2767 state_mem.go:36] [cpumanager] initializing new in-memory state store
	I0526 21:25:17.055890  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168200    2767 state_mem.go:88] [cpumanager] updated default cpuset: ""
	I0526 21:25:17.055903  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168317    2767 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	I0526 21:25:17.055915  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.168438    2767 policy_none.go:43] [cpumanager] none policy: Start
	I0526 21:25:17.055930  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: W0526 21:23:44.170589    2767 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	I0526 21:25:17.055942  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.170973    2767 plugin_manager.go:114] Starting Kubelet Plugin Manager
	I0526 21:25:17.055956  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201167    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.055969  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.201423    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.055982  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202839    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.055995  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.202968    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056017  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349811    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-kubeconfig") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056046  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349855    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-usr-share-ca-certificates") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056070  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349894    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-certs") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.056093  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349913    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-ca-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.056118  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-flexvolume-dir") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056142  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349921    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-ca-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056166  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/474c55dfb64741cc485e46b6bb9f2dc0-k8s-certs") pod "kube-controller-manager-multinode-20210526212238-510955" (UID: "474c55dfb64741cc485e46b6bb9f2dc0")
	I0526 21:25:17.056189  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349955    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-multinode-20210526212238-510955" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	I0526 21:25:17.056212  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.349988    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/34530b4d5ce1b17919f3b8976b2d0456-etcd-data") pod "etcd-multinode-20210526212238-510955" (UID: "34530b4d5ce1b17919f3b8976b2d0456")
	I0526 21:25:17.056235  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350013    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-k8s-certs") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.056259  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350027    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b42b6879229f245abab6047de8662a2f-usr-share-ca-certificates") pod "kube-apiserver-multinode-20210526212238-510955" (UID: "b42b6879229f245abab6047de8662a2f")
	I0526 21:25:17.056275  527485 command_runner.go:124] > May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	I0526 21:25:17.056293  527485 command_runner.go:124] > May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.056308  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	I0526 21:25:17.056322  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	I0526 21:25:17.056340  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.056355  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056378  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056402  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056428  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056449  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	I0526 21:25:17.056464  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056486  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056511  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056534  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056556  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	I0526 21:25:17.056577  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	I0526 21:25:17.056600  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	I0526 21:25:17.056613  527485 command_runner.go:124] > May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056627  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	I0526 21:25:17.056648  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:17.056671  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:17.056693  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	I0526 21:25:17.056716  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	I0526 21:25:17.085352  527485 logs.go:123] Gathering logs for dmesg ...
	I0526 21:25:17.085369  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:25:17.094718  527485 command_runner.go:124] > [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0526 21:25:17.094745  527485 command_runner.go:124] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0526 21:25:17.094759  527485 command_runner.go:124] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0526 21:25:17.094781  527485 command_runner.go:124] > [  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0526 21:25:17.094792  527485 command_runner.go:124] > [  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	I0526 21:25:17.094798  527485 command_runner.go:124] >               If you want to keep using the local clock, then add:
	I0526 21:25:17.094804  527485 command_runner.go:124] >                 "trace_clock=local"
	I0526 21:25:17.094809  527485 command_runner.go:124] >               on the kernel command line
	I0526 21:25:17.094818  527485 command_runner.go:124] > [  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0526 21:25:17.094831  527485 command_runner.go:124] > [  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	I0526 21:25:17.094850  527485 command_runner.go:124] > [  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	I0526 21:25:17.094870  527485 command_runner.go:124] > [  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	I0526 21:25:17.094888  527485 command_runner.go:124] > [  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	I0526 21:25:17.094897  527485 command_runner.go:124] > [  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	I0526 21:25:17.094904  527485 command_runner.go:124] > [  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	I0526 21:25:17.094921  527485 command_runner.go:124] > [  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	I0526 21:25:17.094935  527485 command_runner.go:124] > [May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	I0526 21:25:17.094948  527485 command_runner.go:124] > [  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	I0526 21:25:17.094961  527485 command_runner.go:124] > [  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	I0526 21:25:17.094975  527485 command_runner.go:124] > [ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	I0526 21:25:17.094986  527485 command_runner.go:124] > [ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	I0526 21:25:17.094994  527485 command_runner.go:124] > [May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	I0526 21:25:17.095006  527485 command_runner.go:124] > [ +45.152218] NFSD: Unable to end grace period: -110
	I0526 21:25:17.096269  527485 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:25:17.096283  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0526 21:25:17.234063  527485 command_runner.go:124] > Name:               multinode-20210526212238-510955
	I0526 21:25:17.234090  527485 command_runner.go:124] > Roles:              control-plane,master
	I0526 21:25:17.234100  527485 command_runner.go:124] > Labels:             beta.kubernetes.io/arch=amd64
	I0526 21:25:17.234108  527485 command_runner.go:124] >                     beta.kubernetes.io/os=linux
	I0526 21:25:17.234116  527485 command_runner.go:124] >                     kubernetes.io/arch=amd64
	I0526 21:25:17.234126  527485 command_runner.go:124] >                     kubernetes.io/hostname=multinode-20210526212238-510955
	I0526 21:25:17.234139  527485 command_runner.go:124] >                     kubernetes.io/os=linux
	I0526 21:25:17.234146  527485 command_runner.go:124] >                     minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	I0526 21:25:17.234153  527485 command_runner.go:124] >                     minikube.k8s.io/name=multinode-20210526212238-510955
	I0526 21:25:17.234163  527485 command_runner.go:124] >                     minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	I0526 21:25:17.234169  527485 command_runner.go:124] >                     minikube.k8s.io/version=v1.20.0
	I0526 21:25:17.234176  527485 command_runner.go:124] >                     node-role.kubernetes.io/control-plane=
	I0526 21:25:17.234182  527485 command_runner.go:124] >                     node-role.kubernetes.io/master=
	I0526 21:25:17.234190  527485 command_runner.go:124] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	I0526 21:25:17.234196  527485 command_runner.go:124] >                     node.alpha.kubernetes.io/ttl: 0
	I0526 21:25:17.234203  527485 command_runner.go:124] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0526 21:25:17.234210  527485 command_runner.go:124] > CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	I0526 21:25:17.234216  527485 command_runner.go:124] > Taints:             <none>
	I0526 21:25:17.234223  527485 command_runner.go:124] > Unschedulable:      false
	I0526 21:25:17.234226  527485 command_runner.go:124] > Lease:
	I0526 21:25:17.234231  527485 command_runner.go:124] >   HolderIdentity:  multinode-20210526212238-510955
	I0526 21:25:17.234237  527485 command_runner.go:124] >   AcquireTime:     <unset>
	I0526 21:25:17.234243  527485 command_runner.go:124] >   RenewTime:       Wed, 26 May 2021 21:25:14 +0000
	I0526 21:25:17.234250  527485 command_runner.go:124] > Conditions:
	I0526 21:25:17.234260  527485 command_runner.go:124] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0526 21:25:17.234272  527485 command_runner.go:124] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0526 21:25:17.234286  527485 command_runner.go:124] >   MemoryPressure   False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0526 21:25:17.234308  527485 command_runner.go:124] >   DiskPressure     False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0526 21:25:17.234330  527485 command_runner.go:124] >   PIDPressure      False   Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0526 21:25:17.234359  527485 command_runner.go:124] >   Ready            True    Wed, 26 May 2021 21:24:14 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	I0526 21:25:17.234370  527485 command_runner.go:124] > Addresses:
	I0526 21:25:17.234374  527485 command_runner.go:124] >   InternalIP:  192.168.39.229
	I0526 21:25:17.234379  527485 command_runner.go:124] >   Hostname:    multinode-20210526212238-510955
	I0526 21:25:17.234382  527485 command_runner.go:124] > Capacity:
	I0526 21:25:17.234387  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:17.234392  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:17.234398  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:17.234403  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:17.234411  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:17.234418  527485 command_runner.go:124] > Allocatable:
	I0526 21:25:17.234424  527485 command_runner.go:124] >   cpu:                2
	I0526 21:25:17.234432  527485 command_runner.go:124] >   ephemeral-storage:  17784752Ki
	I0526 21:25:17.234439  527485 command_runner.go:124] >   hugepages-2Mi:      0
	I0526 21:25:17.234448  527485 command_runner.go:124] >   memory:             2186320Ki
	I0526 21:25:17.234453  527485 command_runner.go:124] >   pods:               110
	I0526 21:25:17.234459  527485 command_runner.go:124] > System Info:
	I0526 21:25:17.234464  527485 command_runner.go:124] >   Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	I0526 21:25:17.234470  527485 command_runner.go:124] >   System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	I0526 21:25:17.234477  527485 command_runner.go:124] >   Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	I0526 21:25:17.234482  527485 command_runner.go:124] >   Kernel Version:             4.19.182
	I0526 21:25:17.234488  527485 command_runner.go:124] >   OS Image:                   Buildroot 2020.02.12
	I0526 21:25:17.234494  527485 command_runner.go:124] >   Operating System:           linux
	I0526 21:25:17.234502  527485 command_runner.go:124] >   Architecture:               amd64
	I0526 21:25:17.234512  527485 command_runner.go:124] >   Container Runtime Version:  containerd://1.4.4
	I0526 21:25:17.234520  527485 command_runner.go:124] >   Kubelet Version:            v1.20.2
	I0526 21:25:17.234531  527485 command_runner.go:124] >   Kube-Proxy Version:         v1.20.2
	I0526 21:25:17.234539  527485 command_runner.go:124] > PodCIDR:                      10.244.0.0/24
	I0526 21:25:17.234549  527485 command_runner.go:124] > PodCIDRs:                     10.244.0.0/24
	I0526 21:25:17.234558  527485 command_runner.go:124] > Non-terminated Pods:          (8 in total)
	I0526 21:25:17.234572  527485 command_runner.go:124] >   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	I0526 21:25:17.234583  527485 command_runner.go:124] >   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	I0526 21:25:17.234601  527485 command_runner.go:124] >   kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     84s
	I0526 21:25:17.234619  527485 command_runner.go:124] >   kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         93s
	I0526 21:25:17.234636  527485 command_runner.go:124] >   kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      84s
	I0526 21:25:17.234654  527485 command_runner.go:124] >   kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	I0526 21:25:17.234690  527485 command_runner.go:124] >   kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	I0526 21:25:17.234708  527485 command_runner.go:124] >   kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	I0526 21:25:17.234726  527485 command_runner.go:124] >   kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	I0526 21:25:17.234747  527485 command_runner.go:124] >   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	I0526 21:25:17.234757  527485 command_runner.go:124] > Allocated resources:
	I0526 21:25:17.234769  527485 command_runner.go:124] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0526 21:25:17.234780  527485 command_runner.go:124] >   Resource           Requests     Limits
	I0526 21:25:17.234788  527485 command_runner.go:124] >   --------           --------     ------
	I0526 21:25:17.234796  527485 command_runner.go:124] >   cpu                850m (42%)   100m (5%)
	I0526 21:25:17.234807  527485 command_runner.go:124] >   memory             220Mi (10%)  220Mi (10%)
	I0526 21:25:17.234815  527485 command_runner.go:124] >   ephemeral-storage  100Mi (0%)   0 (0%)
	I0526 21:25:17.234825  527485 command_runner.go:124] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0526 21:25:17.234830  527485 command_runner.go:124] > Events:
	I0526 21:25:17.234837  527485 command_runner.go:124] >   Type    Reason                   Age                  From        Message
	I0526 21:25:17.234845  527485 command_runner.go:124] >   ----    ------                   ----                 ----        -------
	I0526 21:25:17.234852  527485 command_runner.go:124] >   Normal  Starting                 110s                 kubelet     Starting kubelet.
	I0526 21:25:17.234862  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  109s (x4 over 110s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:17.234873  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    109s (x3 over 110s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:17.234884  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     109s (x3 over 110s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:17.234894  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  109s                 kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:17.234901  527485 command_runner.go:124] >   Normal  Starting                 94s                  kubelet     Starting kubelet.
	I0526 21:25:17.234911  527485 command_runner.go:124] >   Normal  NodeHasSufficientMemory  93s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	I0526 21:25:17.234923  527485 command_runner.go:124] >   Normal  NodeHasNoDiskPressure    93s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	I0526 21:25:17.234940  527485 command_runner.go:124] >   Normal  NodeHasSufficientPID     93s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	I0526 21:25:17.234956  527485 command_runner.go:124] >   Normal  NodeAllocatableEnforced  93s                  kubelet     Updated Node Allocatable limit across pods
	I0526 21:25:17.234968  527485 command_runner.go:124] >   Normal  Starting                 83s                  kube-proxy  Starting kube-proxy.
	I0526 21:25:17.234984  527485 command_runner.go:124] >   Normal  NodeReady                73s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
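
The block above is the kubectl-describe-node style dump for multinode-20210526212238-510955 that the test harness collected. A minimal sketch for re-fetching the same view while debugging, assuming kubectl is on PATH and that the profile name from the log is also the kubeconfig context (both assumptions, not stated in this log):

	// describe_node.go: re-fetch the node description shown above.
	// The context and node name are taken from the log; whether they are
	// still valid on your machine is an assumption.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const name = "multinode-20210526212238-510955"
		cmd := exec.Command("kubectl", "--context", name, "describe", "node", name)
		cmd.Stdout = os.Stdout // prints Conditions, Capacity, Pods and Events as above
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubectl describe node failed: %v", err)
		}
	}
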
	I0526 21:25:17.238081  527485 logs.go:123] Gathering logs for etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] ...
	I0526 21:25:17.238102  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
	I0526 21:25:17.260189  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:17.260243  527485 command_runner.go:124] ! 2021-05-26 21:23:30.145280 I | etcdmain: etcd Version: 3.4.13
	I0526 21:25:17.260535  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146007 I | etcdmain: Git SHA: ae9734ed2
	I0526 21:25:17.260584  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146359 I | etcdmain: Go Version: go1.12.17
	I0526 21:25:17.260944  527485 command_runner.go:124] ! 2021-05-26 21:23:30.146935 I | etcdmain: Go OS/Arch: linux/amd64
	I0526 21:25:17.261134  527485 command_runner.go:124] ! 2021-05-26 21:23:30.147549 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
	I0526 21:25:17.261201  527485 command_runner.go:124] ! [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	I0526 21:25:17.261504  527485 command_runner.go:124] ! 2021-05-26 21:23:30.148927 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:17.261560  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159191 I | embed: name = multinode-20210526212238-510955
	I0526 21:25:17.261864  527485 command_runner.go:124] ! 2021-05-26 21:23:30.159781 I | embed: data dir = /var/lib/minikube/etcd
	I0526 21:25:17.261951  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161368 I | embed: member dir = /var/lib/minikube/etcd/member
	I0526 21:25:17.262261  527485 command_runner.go:124] ! 2021-05-26 21:23:30.161781 I | embed: heartbeat = 100ms
	I0526 21:25:17.262280  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162024 I | embed: election = 1000ms
	I0526 21:25:17.262419  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162419 I | embed: snapshot count = 10000
	I0526 21:25:17.262485  527485 command_runner.go:124] ! 2021-05-26 21:23:30.162834 I | embed: advertise client URLs = https://192.168.39.229:2379
	I0526 21:25:17.262700  527485 command_runner.go:124] ! 2021-05-26 21:23:30.186657 I | etcdserver: starting member b8647f2870156d71 in cluster 2bfbf13ce68722b
	I0526 21:25:17.262979  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=()
	I0526 21:25:17.263063  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 0
	I0526 21:25:17.263227  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: newRaft b8647f2870156d71 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	I0526 21:25:17.263377  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became follower at term 1
	I0526 21:25:17.263437  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:17.263679  527485 command_runner.go:124] ! 2021-05-26 21:23:30.205555 W | auth: simple token is not cryptographically signed
	I0526 21:25:17.263960  527485 command_runner.go:124] ! 2021-05-26 21:23:30.234208 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	I0526 21:25:17.264088  527485 command_runner.go:124] ! 2021-05-26 21:23:30.243414 I | etcdserver: b8647f2870156d71 as single-node; fast-forwarding 9 ticks (election ticks 10)
	I0526 21:25:17.264181  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 switched to configuration voters=(13286884612305677681)
	I0526 21:25:17.264437  527485 command_runner.go:124] ! 2021-05-26 21:23:30.255082 I | etcdserver/membership: added member b8647f2870156d71 [https://192.168.39.229:2380] to cluster 2bfbf13ce68722b
	I0526 21:25:17.264505  527485 command_runner.go:124] ! 2021-05-26 21:23:30.261097 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	I0526 21:25:17.264722  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264526 I | embed: listening for peers on 192.168.39.229:2380
	I0526 21:25:17.264938  527485 command_runner.go:124] ! 2021-05-26 21:23:30.264701 I | embed: listening for metrics on http://127.0.0.1:2381
	I0526 21:25:17.265081  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 is starting a new election at term 1
	I0526 21:25:17.265422  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became candidate at term 2
	I0526 21:25:17.265514  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 2
	I0526 21:25:17.265592  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: b8647f2870156d71 became leader at term 2
	I0526 21:25:17.265838  527485 command_runner.go:124] ! raft2021/05/26 21:23:30 INFO: raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 2
	I0526 21:25:17.265897  527485 command_runner.go:124] ! 2021-05-26 21:23:30.893688 I | etcdserver: setting up the initial cluster version to 3.4
	I0526 21:25:17.266124  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897562 I | embed: ready to serve client requests
	I0526 21:25:17.266399  527485 command_runner.go:124] ! 2021-05-26 21:23:30.897893 I | etcdserver: published {Name:multinode-20210526212238-510955 ClientURLs:[https://192.168.39.229:2379]} to cluster 2bfbf13ce68722b
	I0526 21:25:17.266417  527485 command_runner.go:124] ! 2021-05-26 21:23:30.898097 I | embed: ready to serve client requests
	I0526 21:25:17.266428  527485 command_runner.go:124] ! 2021-05-26 21:23:30.904911 I | embed: serving client requests on 127.0.0.1:2379
	I0526 21:25:17.266444  527485 command_runner.go:124] ! 2021-05-26 21:23:30.925406 I | embed: serving client requests on 192.168.39.229:2379
	I0526 21:25:17.266454  527485 command_runner.go:124] ! 2021-05-26 21:23:30.930764 N | etcdserver/membership: set the initial cluster version to 3.4
	I0526 21:25:17.266464  527485 command_runner.go:124] ! 2021-05-26 21:23:30.973015 I | etcdserver/api: enabled capabilities for version 3.4
	I0526 21:25:17.266476  527485 command_runner.go:124] ! 2021-05-26 21:23:35.005110 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:0 size:4" took too long (158.136927ms) to execute
	I0526 21:25:17.266495  527485 command_runner.go:124] ! 2021-05-26 21:23:35.008540 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (159.3133ms) to execute
	I0526 21:25:17.266508  527485 command_runner.go:124] ! 2021-05-26 21:23:35.012635 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (107.936302ms) to execute
	I0526 21:25:17.266524  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013064 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210526212238-510955\" " with result "range_response_count:0 size:4" took too long (148.811077ms) to execute
	I0526 21:25:17.266537  527485 command_runner.go:124] ! 2021-05-26 21:23:35.013577 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (157.477156ms) to execute
	I0526 21:25:17.266546  527485 command_runner.go:124] ! 2021-05-26 21:23:48.034379 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266553  527485 command_runner.go:124] ! 2021-05-26 21:23:50.916831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266561  527485 command_runner.go:124] ! 2021-05-26 21:24:00.917857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266568  527485 command_runner.go:124] ! 2021-05-26 21:24:10.918220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266576  527485 command_runner.go:124] ! 2021-05-26 21:24:20.917896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266583  527485 command_runner.go:124] ! 2021-05-26 21:24:30.916918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266590  527485 command_runner.go:124] ! 2021-05-26 21:24:40.917190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266598  527485 command_runner.go:124] ! 2021-05-26 21:24:50.917237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266605  527485 command_runner.go:124] ! 2021-05-26 21:25:00.916673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	I0526 21:25:17.266613  527485 command_runner.go:124] ! 2021-05-26 21:25:10.921256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
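
The etcd log block above was gathered with the crictl command shown just before it (sudo /bin/crictl logs --tail 400 c8538106e966...). A minimal sketch of repeating that collection step, assuming it runs on the minikube node itself (for example after minikube ssh -p multinode-20210526212238-510955) and that the container ID captured in the log is still current:

	// collect_etcd_logs.go: re-run the log-gathering step shown above.
	// The container ID comes from the log; it changes whenever the etcd
	// container is recreated, so treat it as an assumption.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const etcdID = "c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad"
		cmd := exec.Command("sudo", "/bin/crictl", "logs", "--tail", "400", etcdID)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("crictl logs failed: %v", err)
		}
	}
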
	I0526 21:25:17.270461  527485 logs.go:123] Gathering logs for containerd ...
	I0526 21:25:17.270487  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:25:17.305712  527485 command_runner.go:124] > -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:25:17 UTC. --
	I0526 21:25:17.305741  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:17.305752  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:17.305779  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.412639957Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:17.305804  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.454795725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:17.305820  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.455022736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305844  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456819758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.305861  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.456940685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305881  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457199432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.305898  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457299817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305915  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457342626Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:17.305930  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457353348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305946  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457375564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305962  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457518971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.305984  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457752665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.305999  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457768067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:17.306015  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457801760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:17.306029  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.457811694Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:17.306048  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461742670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:17.306068  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.461851430Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:17.306083  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462036878Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306099  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462069131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306114  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462082171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306130  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462094524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306145  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462115116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306160  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462127721Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306176  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462139766Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306195  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462157542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306213  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462167923Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:17.306228  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462295610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:17.306244  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462357720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:17.306260  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462745295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306276  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462770123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:17.306291  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462815565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306307  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462827921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306323  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462846347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306338  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462857513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306352  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462870788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306369  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462881154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306386  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462892049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306402  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462903002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306417  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462913917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:17.306432  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462958764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306447  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462972025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306461  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462983386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306475  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.462994704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306493  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463133131Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `systemd_cgroup` only works for runtime io.containerd.runtime.v1.linux"
	I0526 21:25:17.306509  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463145276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.306523  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463363744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:17.306537  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463401676Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:17.306550  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 containerd[2107]: time="2021-05-26T21:23:12.463415404Z" level=info msg="containerd successfully booted in 0.052163s"
	I0526 21:25:17.306560  527485 command_runner.go:124] > May 26 21:23:12 multinode-20210526212238-510955 systemd[1]: Stopping containerd container runtime...
	I0526 21:25:17.306572  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: containerd.service: Succeeded.
	I0526 21:25:17.306584  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Stopped containerd container runtime.
	I0526 21:25:17.306594  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Starting containerd container runtime...
	I0526 21:25:17.306604  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 systemd[1]: Started containerd container runtime.
	I0526 21:25:17.306617  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.677351233Z" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=v1.4.4
	I0526 21:25:17.306632  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703735354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0526 21:25:17.306648  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.703939180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306671  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706070962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.182\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.306689  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706222939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306712  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706683734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.306728  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706837938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306743  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.706963959Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0526 21:25:17.306763  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707081760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306778  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707216688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306796  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707381113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0526 21:25:17.306821  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707841019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0526 21:25:17.306836  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.707973506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0526 21:25:17.306853  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708095816Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0526 21:25:17.306868  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708236930Z" level=info msg="metadata content store policy set" policy=shared
	I0526 21:25:17.306884  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708536776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0526 21:25:17.306898  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708698510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0526 21:25:17.306916  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.708937323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306932  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709074999Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306948  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709196994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306963  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709315424Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306979  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709506686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.306996  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709629192Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307025  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709743913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307041  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709857985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307056  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.709979410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0526 21:25:17.307072  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710125076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0526 21:25:17.307087  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710271949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0526 21:25:17.307103  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710830775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0526 21:25:17.307119  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.710974791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0526 21:25:17.307135  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711117145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307150  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711243334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307165  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711363735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307179  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711549081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307194  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711666234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307209  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711781506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307223  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.711895813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307248  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712013139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307263  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712131897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0526 21:25:17.307278  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712269473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307293  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712503525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307308  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712659007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307324  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712779064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307342  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.712986218Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	I0526 21:25:17.307439  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713141331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc000155fb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00037b050 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.2 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	I0526 21:25:17.307456  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713322225Z" level=info msg="Connect containerd service"
	I0526 21:25:17.307470  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.713538361Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	I0526 21:25:17.307491  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714213931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:17.307507  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714359921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0526 21:25:17.307523  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.714868242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	I0526 21:25:17.307537  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715023618Z" level=info msg=serving... address=/run/containerd/containerd.sock
	I0526 21:25:17.307550  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.715142631Z" level=info msg="containerd successfully booted in 0.038760s"
	I0526 21:25:17.307563  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726087774Z" level=info msg="Start subscribing containerd event"
	I0526 21:25:17.307572  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.726733995Z" level=info msg="Start recovering state"
	I0526 21:25:17.307586  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781395051Z" level=info msg="Start event monitor"
	I0526 21:25:17.307599  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781771001Z" level=info msg="Start snapshots syncer"
	I0526 21:25:17.307612  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.781893491Z" level=info msg="Start cni network conf syncer"
	I0526 21:25:17.307624  527485 command_runner.go:124] > May 26 21:23:16 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:16.782003464Z" level=info msg="Start streaming server"
	I0526 21:25:17.307641  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.484581294Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307659  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.490843770Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307679  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.501056680Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307697  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.508591647Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.307716  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.580716340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486 pid=2407
	I0526 21:25:17.307738  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.598809833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb pid=2435
	I0526 21:25:17.307762  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602060491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5 pid=2434
	I0526 21:25:17.307782  527485 command_runner.go:124] > May 26 21:23:28 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:28.602007310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e pid=2452
	I0526 21:25:17.307804  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.066808539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-multinode-20210526212238-510955,Uid:b42b6879229f245abab6047de8662a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\""
	I0526 21:25:17.307824  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.074803022Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	I0526 21:25:17.307846  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.194718464Z" level=info msg="CreateContainer within sandbox \"fe43674906f2080850da99c25995a18c2583bfda5a6a21d58f51cb45f673d486\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:17.307861  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.196219933Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\""
	I0526 21:25:17.307885  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.262678371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-multinode-20210526212238-510955,Uid:474c55dfb64741cc485e46b6bb9f2dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\""
	I0526 21:25:17.307905  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.272571919Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	I0526 21:25:17.307927  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.347228547Z" level=info msg="CreateContainer within sandbox \"73ada73fbbf0b2a7b4a40791347e9a5a366e1f52a347203f20a27bcb2813b6c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:17.307943  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.349365690Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\""
	I0526 21:25:17.307960  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.419043703Z" level=info msg="StartContainer for \"a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c\" returns successfully"
	I0526 21:25:17.307982  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.520520792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-multinode-20210526212238-510955,Uid:6b4a0ee8b3d15a1c2e47c15d32e6eb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\""
	I0526 21:25:17.308004  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.527415671Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	I0526 21:25:17.308026  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.566421321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-multinode-20210526212238-510955,Uid:34530b4d5ce1b17919f3b8976b2d0456,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\""
	I0526 21:25:17.308046  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.575850717Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	I0526 21:25:17.308070  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.621335319Z" level=info msg="CreateContainer within sandbox \"24fd8b8599a6ee5e09c19d4ce15908360ea29418f2bbe3b0ba2d12f73a3519fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:17.308086  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.623169879Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\""
	I0526 21:25:17.308105  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.681255114Z" level=info msg="StartContainer for \"2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18\" returns successfully"
	I0526 21:25:17.308127  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.683704929Z" level=info msg="CreateContainer within sandbox \"2ad404c6a9c449ae1ebfab12355673229979a8ee4cf4d87f94b5ca073d31b43e\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:17.308143  527485 command_runner.go:124] > May 26 21:23:29 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:29.684577023Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\""
	I0526 21:25:17.308160  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.017920282Z" level=info msg="StartContainer for \"c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad\" returns successfully"
	I0526 21:25:17.308177  527485 command_runner.go:124] > May 26 21:23:30 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:30.056525418Z" level=info msg="StartContainer for \"e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08\" returns successfully"
	I0526 21:25:17.308215  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.290788536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	I0526 21:25:17.308233  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.802102062Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308256  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.839975209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8 pid=2987
	I0526 21:25:17.308274  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.915628984Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308294  527485 command_runner.go:124] > May 26 21:23:53 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:53.950847165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a pid=3013
	I0526 21:25:17.308316  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.116312794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbl42,Uid:950a915d-c5f0-4e6f-bc12-ee97013032f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\""
	I0526 21:25:17.308342  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.127305490Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	I0526 21:25:17.308364  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.182202148Z" level=info msg="CreateContainer within sandbox \"038c42970362d9798abb36c3983856aa352e67a59ca9ce5f3e1852c03634a59a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:17.308380  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.188910123Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\""
	I0526 21:25:17.308397  527485 command_runner.go:124] > May 26 21:23:54 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:54.381612238Z" level=info msg="StartContainer for \"de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2\" returns successfully"
	I0526 21:25:17.308417  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.674364903Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	I0526 21:25:17.308437  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683119285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:17.308459  527485 command_runner.go:124] > May 26 21:23:55 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:55.683711665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	I0526 21:25:17.308476  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582858367Z" level=error msg="get state for 53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8" error="context deadline exceeded: unknown"
	I0526 21:25:17.308489  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.582967226Z" level=warning msg="unknown status" status=0
	I0526 21:25:17.308510  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.969753374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2wgbs,Uid:aac3ff91-8f9c-4f4e-81fc-a859f780d67d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\""
	I0526 21:25:17.308531  527485 command_runner.go:124] > May 26 21:23:56 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:56.975070195Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	I0526 21:25:17.308553  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.027887855Z" level=info msg="CreateContainer within sandbox \"53490c652b9e5b3b552f7ca74d9a84e9c42a1849a932e5f024b22d340d5734e8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:17.308571  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.029566085Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\""
	I0526 21:25:17.308587  527485 command_runner.go:124] > May 26 21:23:57 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:23:57.574608517Z" level=info msg="StartContainer for \"69df1859ce4d1a30c4660b7f63cb09e13d69f3813d39620e6ca8dc830b4388bf\" returns successfully"
	I0526 21:25:17.308605  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.297649575Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308623  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.323344186Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,}"
	I0526 21:25:17.308641  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.332120092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55 pid=3313
	I0526 21:25:17.308660  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.442356819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900 pid=3376
	I0526 21:25:17.308681  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.792546853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\""
	I0526 21:25:17.308702  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.796339883Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	I0526 21:25:17.308724  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.843281999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-tw67b,Uid:a0522c32-9960-4c21-8a5a-d0b137009166,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\""
	I0526 21:25:17.308744  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.849108598Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	I0526 21:25:17.308770  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.875948742Z" level=info msg="CreateContainer within sandbox \"722b1b257c571a73cb4686c5476aba37030267ae95e826e47362be9c166adb55\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:17.308786  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.879073015Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\""
	I0526 21:25:17.308807  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.915826719Z" level=info msg="CreateContainer within sandbox \"1d96eb581f035bbd8a09d1caefefe610196dd7fb21d1b74e5f155bddc0a54900\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:17.308823  527485 command_runner.go:124] > May 26 21:24:09 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:09.918179651Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\""
	I0526 21:25:17.308839  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.083539707Z" level=info msg="StartContainer for \"5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d\" returns successfully"
	I0526 21:25:17.308855  527485 command_runner.go:124] > May 26 21:24:10 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:24:10.120722012Z" level=info msg="StartContainer for \"a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a\" returns successfully"
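
The containerd journal lines above trace the CRI pod lifecycle the kubelet drives for each pod: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox and returns a container id, and StartContainer runs it ("StartContainer for ... returns successfully"). Below is a minimal Go sketch of that same call sequence against the containerd CRI socket, assuming the k8s.io/cri-api v1alpha2 bindings, a reachable /run/containerd/containerd.sock, and an image that has already been pulled; the pod name, uid, and image are placeholders, not values taken from this run.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Dial the containerd CRI socket (the same endpoint the kubelet talks to).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// 1) RunPodSandbox returns a sandbox id.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "demo-pod", // placeholder, not from this run
			Uid:       "demo-uid", // placeholder
			Namespace: "default",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatalf("RunPodSandbox: %v", err)
	}

	// 2) CreateContainer within that sandbox returns a container id.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "docker.io/library/busybox:latest"}, // placeholder image, must already be pulled
			Command:  []string{"sleep", "3600"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatalf("CreateContainer: %v", err)
	}

	// 3) StartContainer runs it; containerd then logs "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	fmt.Println("started container", cc.ContainerId, "in sandbox", sb.PodSandboxId)
}
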
	I0526 21:25:17.325604  527485 logs.go:123] Gathering logs for container status ...
	I0526 21:25:17.325623  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:25:17.345401  527485 command_runner.go:124] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	I0526 21:25:17.345429  527485 command_runner.go:124] > a9593dff4428d       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   1d96eb581f035
	I0526 21:25:17.345440  527485 command_runner.go:124] > 5d3df8c94eaed       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   722b1b257c571
	I0526 21:25:17.345459  527485 command_runner.go:124] > 69df1859ce4d1       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   53490c652b9e5
	I0526 21:25:17.345473  527485 command_runner.go:124] > de6efc6fec4b2       43154ddb57a83       About a minute ago   Running             kube-proxy                0                   038c42970362d
	I0526 21:25:17.345487  527485 command_runner.go:124] > c8538106e966b       0369cf4303ffd       About a minute ago   Running             etcd                      0                   2ad404c6a9c44
	I0526 21:25:17.345506  527485 command_runner.go:124] > e6bb9bee7539a       ed2c44fbdd78b       About a minute ago   Running             kube-scheduler            0                   24fd8b8599a6e
	I0526 21:25:17.345524  527485 command_runner.go:124] > 2314e41b1b443       a27166429d98e       About a minute ago   Running             kube-controller-manager   0                   73ada73fbbf0b
	I0526 21:25:17.345537  527485 command_runner.go:124] > a0581c0e5409b       a8c2fdb8bf76e       About a minute ago   Running             kube-apiserver            0                   fe43674906f20
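
The command "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" shown above is the fallback used to read container status: crictl against the CRI runtime when it is on PATH, otherwise docker. A minimal local Go sketch of the same fallback follows (in the report the command is executed inside the guest over SSH via ssh_runner, not locally):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same fallback as in the report: prefer crictl, otherwise `docker ps -a`.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	if err != nil {
		log.Fatalf("container status: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
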
	I0526 21:25:17.346897  527485 logs.go:123] Gathering logs for kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] ...
	I0526 21:25:17.346916  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c"
	I0526 21:25:17.366204  527485 command_runner.go:124] ! Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	I0526 21:25:17.366224  527485 command_runner.go:124] ! I0526 21:23:29.805604       1 server.go:632] external host was not specified, using 192.168.39.229
	I0526 21:25:17.366231  527485 command_runner.go:124] ! I0526 21:23:29.806982       1 server.go:182] Version: v1.20.2
	I0526 21:25:17.366239  527485 command_runner.go:124] ! I0526 21:23:30.593640       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
	I0526 21:25:17.366258  527485 command_runner.go:124] ! I0526 21:23:30.598821       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:17.366279  527485 command_runner.go:124] ! I0526 21:23:30.598945       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:17.366315  527485 command_runner.go:124] ! I0526 21:23:30.600954       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:17.366349  527485 command_runner.go:124] ! I0526 21:23:30.601309       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:17.366363  527485 command_runner.go:124] ! I0526 21:23:30.616590       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366374  527485 command_runner.go:124] ! I0526 21:23:30.617065       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366384  527485 command_runner.go:124] ! I0526 21:23:30.995013       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366409  527485 command_runner.go:124] ! I0526 21:23:30.995139       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366422  527485 command_runner.go:124] ! I0526 21:23:31.030659       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:17.366436  527485 command_runner.go:124] ! I0526 21:23:31.031231       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.366448  527485 command_runner.go:124] ! I0526 21:23:31.031324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.366458  527485 command_runner.go:124] ! I0526 21:23:31.032369       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366470  527485 command_runner.go:124] ! I0526 21:23:31.032725       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366476  527485 command_runner.go:124] ! I0526 21:23:31.143094       1 instance.go:289] Using reconciler: lease
	I0526 21:25:17.366484  527485 command_runner.go:124] ! I0526 21:23:31.148814       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366493  527485 command_runner.go:124] ! I0526 21:23:31.148936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366501  527485 command_runner.go:124] ! I0526 21:23:31.164327       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366510  527485 command_runner.go:124] ! I0526 21:23:31.164627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366520  527485 command_runner.go:124] ! I0526 21:23:31.183831       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366533  527485 command_runner.go:124] ! I0526 21:23:31.184185       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366544  527485 command_runner.go:124] ! I0526 21:23:31.203621       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366555  527485 command_runner.go:124] ! I0526 21:23:31.204140       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366563  527485 command_runner.go:124] ! I0526 21:23:31.218608       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366571  527485 command_runner.go:124] ! I0526 21:23:31.218929       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366580  527485 command_runner.go:124] ! I0526 21:23:31.235670       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366589  527485 command_runner.go:124] ! I0526 21:23:31.235780       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366597  527485 command_runner.go:124] ! I0526 21:23:31.248767       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366623  527485 command_runner.go:124] ! I0526 21:23:31.248973       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366660  527485 command_runner.go:124] ! I0526 21:23:31.270717       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366670  527485 command_runner.go:124] ! I0526 21:23:31.272045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366680  527485 command_runner.go:124] ! I0526 21:23:31.287807       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366691  527485 command_runner.go:124] ! I0526 21:23:31.288158       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366698  527485 command_runner.go:124] ! I0526 21:23:31.302175       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366707  527485 command_runner.go:124] ! I0526 21:23:31.302294       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366716  527485 command_runner.go:124] ! I0526 21:23:31.318788       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366725  527485 command_runner.go:124] ! I0526 21:23:31.318898       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366733  527485 command_runner.go:124] ! I0526 21:23:31.340681       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366742  527485 command_runner.go:124] ! I0526 21:23:31.341103       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366751  527485 command_runner.go:124] ! I0526 21:23:31.364875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366760  527485 command_runner.go:124] ! I0526 21:23:31.365260       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366768  527485 command_runner.go:124] ! I0526 21:23:31.375229       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366777  527485 command_runner.go:124] ! I0526 21:23:31.375353       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366786  527485 command_runner.go:124] ! I0526 21:23:31.384385       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366795  527485 command_runner.go:124] ! I0526 21:23:31.384585       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366806  527485 command_runner.go:124] ! I0526 21:23:31.392770       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366818  527485 command_runner.go:124] ! I0526 21:23:31.392939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366828  527485 command_runner.go:124] ! I0526 21:23:31.406398       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366841  527485 command_runner.go:124] ! I0526 21:23:31.406589       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366849  527485 command_runner.go:124] ! I0526 21:23:31.421828       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366858  527485 command_runner.go:124] ! I0526 21:23:31.422392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366868  527485 command_runner.go:124] ! I0526 21:23:31.434772       1 rest.go:131] the default service ipfamily for this cluster is: IPv4
	I0526 21:25:17.366875  527485 command_runner.go:124] ! I0526 21:23:31.530123       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366885  527485 command_runner.go:124] ! I0526 21:23:31.530234       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366893  527485 command_runner.go:124] ! I0526 21:23:31.542917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366902  527485 command_runner.go:124] ! I0526 21:23:31.543258       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366911  527485 command_runner.go:124] ! I0526 21:23:31.558871       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366921  527485 command_runner.go:124] ! I0526 21:23:31.558975       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366930  527485 command_runner.go:124] ! I0526 21:23:31.578311       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366939  527485 command_runner.go:124] ! I0526 21:23:31.578428       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366948  527485 command_runner.go:124] ! I0526 21:23:31.579212       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366957  527485 command_runner.go:124] ! I0526 21:23:31.579406       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366966  527485 command_runner.go:124] ! I0526 21:23:31.593279       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366975  527485 command_runner.go:124] ! I0526 21:23:31.593392       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.366983  527485 command_runner.go:124] ! I0526 21:23:31.609260       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.366993  527485 command_runner.go:124] ! I0526 21:23:31.609368       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367001  527485 command_runner.go:124] ! I0526 21:23:31.626851       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367010  527485 command_runner.go:124] ! I0526 21:23:31.626960       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367033  527485 command_runner.go:124] ! I0526 21:23:31.653023       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367056  527485 command_runner.go:124] ! I0526 21:23:31.653138       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367066  527485 command_runner.go:124] ! I0526 21:23:31.662951       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367076  527485 command_runner.go:124] ! I0526 21:23:31.663349       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367085  527485 command_runner.go:124] ! I0526 21:23:31.683106       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367094  527485 command_runner.go:124] ! I0526 21:23:31.684613       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367104  527485 command_runner.go:124] ! I0526 21:23:31.700741       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367115  527485 command_runner.go:124] ! I0526 21:23:31.701266       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367124  527485 command_runner.go:124] ! I0526 21:23:31.722045       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367133  527485 command_runner.go:124] ! I0526 21:23:31.722235       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367141  527485 command_runner.go:124] ! I0526 21:23:31.736295       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367150  527485 command_runner.go:124] ! I0526 21:23:31.737071       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367157  527485 command_runner.go:124] ! I0526 21:23:31.751086       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367166  527485 command_runner.go:124] ! I0526 21:23:31.751202       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367175  527485 command_runner.go:124] ! I0526 21:23:31.767941       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367184  527485 command_runner.go:124] ! I0526 21:23:31.768045       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367192  527485 command_runner.go:124] ! I0526 21:23:31.784917       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367201  527485 command_runner.go:124] ! I0526 21:23:31.785029       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367209  527485 command_runner.go:124] ! I0526 21:23:31.802204       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367218  527485 command_runner.go:124] ! I0526 21:23:31.802314       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367227  527485 command_runner.go:124] ! I0526 21:23:31.817427       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367241  527485 command_runner.go:124] ! I0526 21:23:31.817616       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367252  527485 command_runner.go:124] ! I0526 21:23:31.837841       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367265  527485 command_runner.go:124] ! I0526 21:23:31.837939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367277  527485 command_runner.go:124] ! I0526 21:23:31.860217       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367293  527485 command_runner.go:124] ! I0526 21:23:31.861221       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367307  527485 command_runner.go:124] ! I0526 21:23:31.871254       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367324  527485 command_runner.go:124] ! I0526 21:23:31.872836       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367336  527485 command_runner.go:124] ! I0526 21:23:31.884052       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367347  527485 command_runner.go:124] ! I0526 21:23:31.884160       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367353  527485 command_runner.go:124] ! I0526 21:23:31.898818       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367368  527485 command_runner.go:124] ! I0526 21:23:31.898925       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367383  527485 command_runner.go:124] ! I0526 21:23:31.913046       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367398  527485 command_runner.go:124] ! I0526 21:23:31.913149       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367411  527485 command_runner.go:124] ! I0526 21:23:31.925884       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367424  527485 command_runner.go:124] ! I0526 21:23:31.925994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367434  527485 command_runner.go:124] ! I0526 21:23:31.939143       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367443  527485 command_runner.go:124] ! I0526 21:23:31.939253       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367455  527485 command_runner.go:124] ! I0526 21:23:31.954393       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367494  527485 command_runner.go:124] ! I0526 21:23:31.956005       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367508  527485 command_runner.go:124] ! I0526 21:23:31.964255       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367522  527485 command_runner.go:124] ! I0526 21:23:31.964369       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367529  527485 command_runner.go:124] ! I0526 21:23:31.980824       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367539  527485 command_runner.go:124] ! I0526 21:23:31.980931       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367548  527485 command_runner.go:124] ! I0526 21:23:31.998875       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367558  527485 command_runner.go:124] ! I0526 21:23:31.998978       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367566  527485 command_runner.go:124] ! I0526 21:23:32.014057       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367575  527485 command_runner.go:124] ! I0526 21:23:32.014169       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367584  527485 command_runner.go:124] ! I0526 21:23:32.027301       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367595  527485 command_runner.go:124] ! I0526 21:23:32.027633       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367603  527485 command_runner.go:124] ! I0526 21:23:32.046160       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367613  527485 command_runner.go:124] ! I0526 21:23:32.046890       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367622  527485 command_runner.go:124] ! I0526 21:23:32.068538       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367631  527485 command_runner.go:124] ! I0526 21:23:32.069814       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367639  527485 command_runner.go:124] ! I0526 21:23:32.087119       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367648  527485 command_runner.go:124] ! I0526 21:23:32.087547       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367657  527485 command_runner.go:124] ! I0526 21:23:32.097832       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367668  527485 command_runner.go:124] ! I0526 21:23:32.097940       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367676  527485 command_runner.go:124] ! I0526 21:23:32.107249       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367685  527485 command_runner.go:124] ! I0526 21:23:32.107932       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367694  527485 command_runner.go:124] ! I0526 21:23:32.119796       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367703  527485 command_runner.go:124] ! I0526 21:23:32.119897       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367711  527485 command_runner.go:124] ! I0526 21:23:32.128209       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367720  527485 command_runner.go:124] ! I0526 21:23:32.128321       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367728  527485 command_runner.go:124] ! I0526 21:23:32.138008       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367737  527485 command_runner.go:124] ! I0526 21:23:32.138111       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367746  527485 command_runner.go:124] ! I0526 21:23:32.160727       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367755  527485 command_runner.go:124] ! I0526 21:23:32.160833       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367764  527485 command_runner.go:124] ! I0526 21:23:32.186843       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367773  527485 command_runner.go:124] ! I0526 21:23:32.186949       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367781  527485 command_runner.go:124] ! I0526 21:23:32.198121       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367791  527485 command_runner.go:124] ! I0526 21:23:32.198232       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367799  527485 command_runner.go:124] ! I0526 21:23:32.206015       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367814  527485 command_runner.go:124] ! I0526 21:23:32.206127       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367824  527485 command_runner.go:124] ! I0526 21:23:32.222761       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367834  527485 command_runner.go:124] ! I0526 21:23:32.223204       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367842  527485 command_runner.go:124] ! I0526 21:23:32.232528       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367852  527485 command_runner.go:124] ! I0526 21:23:32.232629       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367860  527485 command_runner.go:124] ! I0526 21:23:32.245897       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367885  527485 command_runner.go:124] ! I0526 21:23:32.246007       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367897  527485 command_runner.go:124] ! I0526 21:23:32.263847       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367919  527485 command_runner.go:124] ! I0526 21:23:32.263950       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367927  527485 command_runner.go:124] ! I0526 21:23:32.275996       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367936  527485 command_runner.go:124] ! I0526 21:23:32.276100       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367945  527485 command_runner.go:124] ! I0526 21:23:32.286992       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367954  527485 command_runner.go:124] ! I0526 21:23:32.288760       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367962  527485 command_runner.go:124] ! I0526 21:23:32.300558       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.367971  527485 command_runner.go:124] ! I0526 21:23:32.300656       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.367982  527485 command_runner.go:124] ! W0526 21:23:32.466350       1 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
	I0526 21:25:17.367993  527485 command_runner.go:124] ! W0526 21:23:32.475974       1 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368001  527485 command_runner.go:124] ! W0526 21:23:32.486620       1 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368012  527485 command_runner.go:124] ! W0526 21:23:32.495038       1 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368023  527485 command_runner.go:124] ! W0526 21:23:32.498634       1 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368034  527485 command_runner.go:124] ! W0526 21:23:32.503834       1 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368044  527485 command_runner.go:124] ! W0526 21:23:32.506839       1 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0526 21:25:17.368054  527485 command_runner.go:124] ! W0526 21:23:32.511920       1 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
	I0526 21:25:17.368064  527485 command_runner.go:124] ! W0526 21:23:32.512155       1 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
	I0526 21:25:17.368083  527485 command_runner.go:124] ! I0526 21:23:32.520325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0526 21:25:17.368106  527485 command_runner.go:124] ! I0526 21:23:32.520699       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0526 21:25:17.368116  527485 command_runner.go:124] ! I0526 21:23:32.522294       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.368127  527485 command_runner.go:124] ! I0526 21:23:32.522675       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.368136  527485 command_runner.go:124] ! I0526 21:23:32.531035       1 client.go:360] parsed scheme: "endpoint"
	I0526 21:25:17.368145  527485 command_runner.go:124] ! I0526 21:23:32.531144       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0526 21:25:17.368157  527485 command_runner.go:124] ! I0526 21:23:34.690784       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:17.368166  527485 command_runner.go:124] ! I0526 21:23:34.691285       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.368178  527485 command_runner.go:124] ! I0526 21:23:34.692130       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
	I0526 21:25:17.368188  527485 command_runner.go:124] ! I0526 21:23:34.692740       1 secure_serving.go:197] Serving securely on [::]:8443
	I0526 21:25:17.368196  527485 command_runner.go:124] ! I0526 21:23:34.693343       1 apf_controller.go:261] Starting API Priority and Fairness config controller
	I0526 21:25:17.368204  527485 command_runner.go:124] ! I0526 21:23:34.693677       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:17.368215  527485 command_runner.go:124] ! I0526 21:23:34.694744       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0526 21:25:17.368224  527485 command_runner.go:124] ! I0526 21:23:34.694836       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0526 21:25:17.368247  527485 command_runner.go:124] ! I0526 21:23:34.694880       1 available_controller.go:475] Starting AvailableConditionController
	I0526 21:25:17.368258  527485 command_runner.go:124] ! I0526 21:23:34.694885       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0526 21:25:17.368265  527485 command_runner.go:124] ! I0526 21:23:34.694904       1 autoregister_controller.go:141] Starting autoregister controller
	I0526 21:25:17.368274  527485 command_runner.go:124] ! I0526 21:23:34.694908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0526 21:25:17.368281  527485 command_runner.go:124] ! I0526 21:23:34.696887       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0526 21:25:17.368292  527485 command_runner.go:124] ! I0526 21:23:34.697053       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0526 21:25:17.368305  527485 command_runner.go:124] ! I0526 21:23:34.697670       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	I0526 21:25:17.368314  527485 command_runner.go:124] ! I0526 21:23:34.697935       1 controller.go:83] Starting OpenAPI AggregationController
	I0526 21:25:17.368323  527485 command_runner.go:124] ! I0526 21:23:34.698627       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0526 21:25:17.368334  527485 command_runner.go:124] ! I0526 21:23:34.705120       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.368344  527485 command_runner.go:124] ! I0526 21:23:34.705289       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:17.368354  527485 command_runner.go:124] ! I0526 21:23:34.706119       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0526 21:25:17.368362  527485 command_runner.go:124] ! I0526 21:23:34.706246       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0526 21:25:17.368376  527485 command_runner.go:124] ! E0526 21:23:34.733148       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.229, ResourceVersion: 0, AdditionalErrorMsg: 
	I0526 21:25:17.368387  527485 command_runner.go:124] ! I0526 21:23:34.762565       1 controller.go:86] Starting OpenAPI controller
	I0526 21:25:17.368398  527485 command_runner.go:124] ! I0526 21:23:34.762983       1 naming_controller.go:291] Starting NamingConditionController
	I0526 21:25:17.368413  527485 command_runner.go:124] ! I0526 21:23:34.763230       1 establishing_controller.go:76] Starting EstablishingController
	I0526 21:25:17.368428  527485 command_runner.go:124] ! I0526 21:23:34.763815       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0526 21:25:17.368441  527485 command_runner.go:124] ! I0526 21:23:34.764676       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0526 21:25:17.368451  527485 command_runner.go:124] ! I0526 21:23:34.765003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0526 21:25:17.368458  527485 command_runner.go:124] ! I0526 21:23:34.894833       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0526 21:25:17.368467  527485 command_runner.go:124] ! I0526 21:23:34.895159       1 cache.go:39] Caches are synced for autoregister controller
	I0526 21:25:17.368477  527485 command_runner.go:124] ! I0526 21:23:34.895543       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0526 21:25:17.368486  527485 command_runner.go:124] ! I0526 21:23:34.895893       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0526 21:25:17.368496  527485 command_runner.go:124] ! I0526 21:23:34.897085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0526 21:25:17.368504  527485 command_runner.go:124] ! I0526 21:23:34.899871       1 apf_controller.go:266] Running API Priority and Fairness config worker
	I0526 21:25:17.368513  527485 command_runner.go:124] ! I0526 21:23:34.907242       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0526 21:25:17.368524  527485 command_runner.go:124] ! I0526 21:23:35.022751       1 controller.go:609] quota admission added evaluator for: namespaces
	I0526 21:25:17.368541  527485 command_runner.go:124] ! I0526 21:23:35.690855       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0526 21:25:17.368560  527485 command_runner.go:124] ! I0526 21:23:35.691097       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0526 21:25:17.368580  527485 command_runner.go:124] ! I0526 21:23:35.708402       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0526 21:25:17.368598  527485 command_runner.go:124] ! I0526 21:23:35.726885       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0526 21:25:17.368613  527485 command_runner.go:124] ! I0526 21:23:35.727088       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0526 21:25:17.368625  527485 command_runner.go:124] ! I0526 21:23:36.334571       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0526 21:25:17.368637  527485 command_runner.go:124] ! I0526 21:23:36.389004       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0526 21:25:17.368646  527485 command_runner.go:124] ! W0526 21:23:36.485873       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.229]
	I0526 21:25:17.368658  527485 command_runner.go:124] ! I0526 21:23:36.487435       1 controller.go:609] quota admission added evaluator for: endpoints
	I0526 21:25:17.368668  527485 command_runner.go:124] ! I0526 21:23:36.499209       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0526 21:25:17.368678  527485 command_runner.go:124] ! I0526 21:23:37.294654       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0526 21:25:17.368686  527485 command_runner.go:124] ! I0526 21:23:38.382157       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0526 21:25:17.368695  527485 command_runner.go:124] ! I0526 21:23:38.454712       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0526 21:25:17.368705  527485 command_runner.go:124] ! I0526 21:23:43.955877       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0526 21:25:17.368713  527485 command_runner.go:124] ! I0526 21:23:53.285833       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0526 21:25:17.368723  527485 command_runner.go:124] ! I0526 21:23:53.338274       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0526 21:25:17.368729  527485 command_runner.go:124] ! I0526 21:24:01.973387       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:17.368740  527485 command_runner.go:124] ! I0526 21:24:01.973608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.368750  527485 command_runner.go:124] ! I0526 21:24:01.973627       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:25:17.368758  527485 command_runner.go:124] ! I0526 21:24:43.497572       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:25:17.368770  527485 command_runner.go:124] ! I0526 21:24:43.497775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:25:17.368779  527485 command_runner.go:124] ! I0526 21:24:43.498072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
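
The kube-apiserver log above was gathered with "crictl logs --tail 400 <container id>", and the same per-container tail is repeated below for coredns and kube-scheduler. A small Go sketch that reproduces this gathering step, assuming crictl is installed with sudo access to the CRI socket and that the standard -a/-q/--tail flags are available:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List every container id, then tail the last 400 log lines of each,
	// mirroring the per-container `crictl logs --tail 400 <id>` calls above.
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "-q").Output()
	if err != nil {
		log.Fatalf("list containers: %v", err)
	}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			log.Printf("logs %s: %v", id, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", id, out)
	}
}
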
	I0526 21:25:17.379028  527485 logs.go:123] Gathering logs for coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] ...
	I0526 21:25:17.379043  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a"
	I0526 21:25:17.398421  527485 command_runner.go:124] > .:53
	I0526 21:25:17.398439  527485 command_runner.go:124] > [INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	I0526 21:25:17.398444  527485 command_runner.go:124] > CoreDNS-1.7.0
	I0526 21:25:17.398451  527485 command_runner.go:124] > linux/amd64, go1.14.4, f59c03d
	I0526 21:25:17.398533  527485 logs.go:123] Gathering logs for kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] ...
	I0526 21:25:17.398545  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08"
	I0526 21:25:17.416827  527485 command_runner.go:124] ! I0526 21:23:31.228401       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:17.416933  527485 command_runner.go:124] ! W0526 21:23:34.792981       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0526 21:25:17.417081  527485 command_runner.go:124] ! W0526 21:23:34.795544       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:17.417193  527485 command_runner.go:124] ! W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0526 21:25:17.417287  527485 command_runner.go:124] ! W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:25:17.417561  527485 command_runner.go:124] ! I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:17.417657  527485 command_runner.go:124] ! I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:25:17.417725  527485 command_runner.go:124] ! I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:25:17.418064  527485 command_runner.go:124] ! I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:17.418159  527485 command_runner.go:124] ! E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:17.418425  527485 command_runner.go:124] ! E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:17.418500  527485 command_runner.go:124] ! E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0526 21:25:17.418601  527485 command_runner.go:124] ! E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0526 21:25:17.418846  527485 command_runner.go:124] ! E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0526 21:25:17.418951  527485 command_runner.go:124] ! E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:17.419299  527485 command_runner.go:124] ! E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:17.419377  527485 command_runner.go:124] ! E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:17.419475  527485 command_runner.go:124] ! E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:17.419623  527485 command_runner.go:124] ! E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0526 21:25:17.419694  527485 command_runner.go:124] ! E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:17.420075  527485 command_runner.go:124] ! E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0526 21:25:17.420144  527485 command_runner.go:124] ! E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0526 21:25:17.420398  527485 command_runner.go:124] ! E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0526 21:25:17.420484  527485 command_runner.go:124] ! E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0526 21:25:17.420586  527485 command_runner.go:124] ! E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0526 21:25:17.420682  527485 command_runner.go:124] ! E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0526 21:25:17.421022  527485 command_runner.go:124] ! E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:25:17.421078  527485 command_runner.go:124] ! I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
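The "Waiting for caches to sync" / "Caches are synced" pairs in the scheduler log above (and in the controller-manager log that follows) come from client-go shared informers. A minimal sketch of that pattern, assuming a clientset built from a local kubeconfig; the path and names here are illustrative and this is not minikube's or kube-scheduler's code:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster; the kubeconfig location is only an example.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // logs "Waiting for caches to sync ..."

	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("Caches are synced") // mirrors the shared_informer.go:247 lines above
}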
	I0526 21:25:17.424942  527485 logs.go:123] Gathering logs for kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] ...
	I0526 21:25:17.424956  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18"
	I0526 21:25:17.444078  527485 command_runner.go:124] ! Flag --port has been deprecated, see --secure-port instead.
	I0526 21:25:17.444154  527485 command_runner.go:124] ! I0526 21:23:30.770698       1 serving.go:331] Generated self-signed cert in-memory
	I0526 21:25:17.444506  527485 command_runner.go:124] ! I0526 21:23:31.105740       1 controllermanager.go:176] Version: v1.20.2
	I0526 21:25:17.444643  527485 command_runner.go:124] ! I0526 21:23:31.110528       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0526 21:25:17.444783  527485 command_runner.go:124] ! I0526 21:23:31.110685       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0526 21:25:17.445172  527485 command_runner.go:124] ! I0526 21:23:31.111406       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	I0526 21:25:17.445292  527485 command_runner.go:124] ! I0526 21:23:31.111685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0526 21:25:17.445688  527485 command_runner.go:124] ! I0526 21:23:37.283320       1 shared_informer.go:240] Waiting for caches to sync for tokens
	I0526 21:25:17.445922  527485 command_runner.go:124] ! I0526 21:23:37.384858       1 shared_informer.go:247] Caches are synced for tokens 
	I0526 21:25:17.446060  527485 command_runner.go:124] ! I0526 21:23:37.398260       1 controllermanager.go:554] Started "csrcleaner"
	I0526 21:25:17.446406  527485 command_runner.go:124] ! I0526 21:23:37.398681       1 cleaner.go:82] Starting CSR cleaner controller
	I0526 21:25:17.446539  527485 command_runner.go:124] ! I0526 21:23:37.436326       1 controllermanager.go:554] Started "tokencleaner"
	I0526 21:25:17.446909  527485 command_runner.go:124] ! I0526 21:23:37.436948       1 tokencleaner.go:118] Starting token cleaner controller
	I0526 21:25:17.447031  527485 command_runner.go:124] ! I0526 21:23:37.437051       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0526 21:25:17.447231  527485 command_runner.go:124] ! I0526 21:23:37.437060       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0526 21:25:17.447450  527485 command_runner.go:124] ! E0526 21:23:37.458692       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	I0526 21:25:17.447548  527485 command_runner.go:124] ! W0526 21:23:37.458788       1 controllermanager.go:546] Skipping "service"
	I0526 21:25:17.447998  527485 command_runner.go:124] ! I0526 21:23:37.485897       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	I0526 21:25:17.448078  527485 command_runner.go:124] ! W0526 21:23:37.486148       1 controllermanager.go:546] Skipping "ephemeral-volume"
	I0526 21:25:17.448448  527485 command_runner.go:124] ! I0526 21:23:37.486971       1 publisher.go:98] Starting root CA certificate configmap publisher
	I0526 21:25:17.448545  527485 command_runner.go:124] ! I0526 21:23:37.487325       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0526 21:25:17.449122  527485 command_runner.go:124] ! I0526 21:23:37.514186       1 controllermanager.go:554] Started "endpointslicemirroring"
	I0526 21:25:17.449148  527485 command_runner.go:124] ! I0526 21:23:37.515190       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0526 21:25:17.449162  527485 command_runner.go:124] ! I0526 21:23:37.515570       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0526 21:25:17.449180  527485 command_runner.go:124] ! I0526 21:23:37.550580       1 controllermanager.go:554] Started "replicaset"
	I0526 21:25:17.449192  527485 command_runner.go:124] ! I0526 21:23:37.551218       1 replica_set.go:182] Starting replicaset controller
	I0526 21:25:17.449208  527485 command_runner.go:124] ! I0526 21:23:37.551414       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0526 21:25:17.449224  527485 command_runner.go:124] ! I0526 21:23:37.987267       1 controllermanager.go:554] Started "horizontalpodautoscaling"
	I0526 21:25:17.449238  527485 command_runner.go:124] ! I0526 21:23:37.988181       1 horizontal.go:169] Starting HPA controller
	I0526 21:25:17.449254  527485 command_runner.go:124] ! I0526 21:23:37.988418       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0526 21:25:17.449276  527485 command_runner.go:124] ! I0526 21:23:38.238507       1 controllermanager.go:554] Started "persistentvolume-binder"
	I0526 21:25:17.449289  527485 command_runner.go:124] ! I0526 21:23:38.238941       1 pv_controller_base.go:307] Starting persistent volume controller
	I0526 21:25:17.449306  527485 command_runner.go:124] ! I0526 21:23:38.238953       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0526 21:25:17.449349  527485 command_runner.go:124] ! I0526 21:23:38.636899       1 controllermanager.go:554] Started "garbagecollector"
	I0526 21:25:17.449367  527485 command_runner.go:124] ! I0526 21:23:38.636902       1 garbagecollector.go:142] Starting garbage collector controller
	I0526 21:25:17.449380  527485 command_runner.go:124] ! I0526 21:23:38.636960       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:17.449394  527485 command_runner.go:124] ! I0526 21:23:38.637525       1 graph_builder.go:289] GraphBuilder running
	I0526 21:25:17.449405  527485 command_runner.go:124] ! I0526 21:23:39.037283       1 controllermanager.go:554] Started "disruption"
	I0526 21:25:17.449421  527485 command_runner.go:124] ! I0526 21:23:39.037574       1 disruption.go:331] Starting disruption controller
	I0526 21:25:17.449437  527485 command_runner.go:124] ! I0526 21:23:39.037585       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0526 21:25:17.449453  527485 command_runner.go:124] ! I0526 21:23:39.286540       1 controllermanager.go:554] Started "clusterrole-aggregation"
	I0526 21:25:17.449470  527485 command_runner.go:124] ! I0526 21:23:39.286598       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
	I0526 21:25:17.449490  527485 command_runner.go:124] ! I0526 21:23:39.286605       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0526 21:25:17.449506  527485 command_runner.go:124] ! I0526 21:23:39.537304       1 controllermanager.go:554] Started "pvc-protection"
	I0526 21:25:17.449522  527485 command_runner.go:124] ! I0526 21:23:39.537579       1 pvc_protection_controller.go:110] Starting PVC protection controller
	I0526 21:25:17.449540  527485 command_runner.go:124] ! I0526 21:23:39.537670       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0526 21:25:17.449556  527485 command_runner.go:124] ! I0526 21:23:39.786982       1 controllermanager.go:554] Started "pv-protection"
	I0526 21:25:17.449572  527485 command_runner.go:124] ! I0526 21:23:39.787110       1 pv_protection_controller.go:83] Starting PV protection controller
	I0526 21:25:17.449586  527485 command_runner.go:124] ! I0526 21:23:39.787118       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0526 21:25:17.449601  527485 command_runner.go:124] ! I0526 21:23:40.036383       1 controllermanager.go:554] Started "endpoint"
	I0526 21:25:17.449614  527485 command_runner.go:124] ! I0526 21:23:40.036415       1 endpoints_controller.go:184] Starting endpoint controller
	I0526 21:25:17.449629  527485 command_runner.go:124] ! I0526 21:23:40.037058       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	I0526 21:25:17.449648  527485 command_runner.go:124] ! I0526 21:23:40.288607       1 controllermanager.go:554] Started "podgc"
	I0526 21:25:17.449663  527485 command_runner.go:124] ! I0526 21:23:40.288827       1 gc_controller.go:89] Starting GC controller
	I0526 21:25:17.449678  527485 command_runner.go:124] ! I0526 21:23:40.289411       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0526 21:25:17.449702  527485 command_runner.go:124] ! W0526 21:23:40.988861       1 shared_informer.go:494] resyncPeriod 13h30m7.5724073s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:17.449720  527485 command_runner.go:124] ! I0526 21:23:40.989960       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
	I0526 21:25:17.449741  527485 command_runner.go:124] ! I0526 21:23:40.990215       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
	I0526 21:25:17.449760  527485 command_runner.go:124] ! I0526 21:23:40.990426       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
	I0526 21:25:17.449781  527485 command_runner.go:124] ! I0526 21:23:40.990971       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0526 21:25:17.449802  527485 command_runner.go:124] ! I0526 21:23:40.991569       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I0526 21:25:17.449822  527485 command_runner.go:124] ! I0526 21:23:40.991963       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0526 21:25:17.449840  527485 command_runner.go:124] ! I0526 21:23:40.992141       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
	I0526 21:25:17.449860  527485 command_runner.go:124] ! I0526 21:23:40.992301       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0526 21:25:17.449879  527485 command_runner.go:124] ! I0526 21:23:40.992532       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
	I0526 21:25:17.449903  527485 command_runner.go:124] ! W0526 21:23:40.992690       1 shared_informer.go:494] resyncPeriod 13h37m25.694603534s is smaller than resyncCheckPeriod 19h40m47.70464655s and the informer has already started. Changing it to 19h40m47.70464655s
	I0526 21:25:17.449923  527485 command_runner.go:124] ! I0526 21:23:40.993075       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0526 21:25:17.449943  527485 command_runner.go:124] ! I0526 21:23:40.993243       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0526 21:25:17.449962  527485 command_runner.go:124] ! I0526 21:23:40.993580       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
	I0526 21:25:17.449981  527485 command_runner.go:124] ! I0526 21:23:40.993747       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0526 21:25:17.450033  527485 command_runner.go:124] ! I0526 21:23:40.993780       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
	I0526 21:25:17.450054  527485 command_runner.go:124] ! I0526 21:23:40.993805       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0526 21:25:17.450072  527485 command_runner.go:124] ! I0526 21:23:40.993841       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
	I0526 21:25:17.450091  527485 command_runner.go:124] ! I0526 21:23:40.993861       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
	I0526 21:25:17.450109  527485 command_runner.go:124] ! I0526 21:23:40.993876       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
	I0526 21:25:17.450127  527485 command_runner.go:124] ! I0526 21:23:40.993891       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
	I0526 21:25:17.450145  527485 command_runner.go:124] ! I0526 21:23:40.993951       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0526 21:25:17.450163  527485 command_runner.go:124] ! I0526 21:23:40.993980       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0526 21:25:17.450178  527485 command_runner.go:124] ! I0526 21:23:40.994082       1 controllermanager.go:554] Started "resourcequota"
	I0526 21:25:17.450210  527485 command_runner.go:124] ! I0526 21:23:40.994178       1 resource_quota_controller.go:273] Starting resource quota controller
	I0526 21:25:17.450227  527485 command_runner.go:124] ! I0526 21:23:40.994191       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:17.450242  527485 command_runner.go:124] ! I0526 21:23:40.994219       1 resource_quota_monitor.go:304] QuotaMonitor running
	I0526 21:25:17.450257  527485 command_runner.go:124] ! I0526 21:23:41.028175       1 controllermanager.go:554] Started "namespace"
	I0526 21:25:17.450278  527485 command_runner.go:124] ! I0526 21:23:41.028716       1 namespace_controller.go:200] Starting namespace controller
	I0526 21:25:17.450294  527485 command_runner.go:124] ! I0526 21:23:41.028992       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0526 21:25:17.450308  527485 command_runner.go:124] ! I0526 21:23:41.051981       1 controllermanager.go:554] Started "ttl"
	I0526 21:25:17.450323  527485 command_runner.go:124] ! I0526 21:23:41.052926       1 ttl_controller.go:121] Starting TTL controller
	I0526 21:25:17.450338  527485 command_runner.go:124] ! I0526 21:23:41.053383       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0526 21:25:17.450353  527485 command_runner.go:124] ! I0526 21:23:41.289145       1 controllermanager.go:554] Started "attachdetach"
	I0526 21:25:17.450369  527485 command_runner.go:124] ! W0526 21:23:41.289246       1 controllermanager.go:546] Skipping "ttl-after-finished"
	I0526 21:25:17.450386  527485 command_runner.go:124] ! I0526 21:23:41.289282       1 attach_detach_controller.go:328] Starting attach detach controller
	I0526 21:25:17.450406  527485 command_runner.go:124] ! I0526 21:23:41.289291       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0526 21:25:17.450421  527485 command_runner.go:124] ! I0526 21:23:41.537362       1 controllermanager.go:554] Started "serviceaccount"
	I0526 21:25:17.450438  527485 command_runner.go:124] ! I0526 21:23:41.537403       1 serviceaccounts_controller.go:117] Starting service account controller
	I0526 21:25:17.450454  527485 command_runner.go:124] ! I0526 21:23:41.538137       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0526 21:25:17.450469  527485 command_runner.go:124] ! I0526 21:23:41.787243       1 controllermanager.go:554] Started "deployment"
	I0526 21:25:17.450484  527485 command_runner.go:124] ! I0526 21:23:41.788023       1 deployment_controller.go:153] Starting deployment controller
	I0526 21:25:17.450500  527485 command_runner.go:124] ! I0526 21:23:41.790417       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I0526 21:25:17.450515  527485 command_runner.go:124] ! I0526 21:23:41.936235       1 controllermanager.go:554] Started "csrapproving"
	I0526 21:25:17.450532  527485 command_runner.go:124] ! I0526 21:23:41.936293       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0526 21:25:17.450550  527485 command_runner.go:124] ! I0526 21:23:41.936301       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0526 21:25:17.450570  527485 command_runner.go:124] ! I0526 21:23:42.137381       1 request.go:655] Throttling request took 1.048213324s, request: GET:https://192.168.39.229:8443/apis/extensions/v1beta1?timeout=32s
	I0526 21:25:17.450586  527485 command_runner.go:124] ! I0526 21:23:42.189224       1 node_ipam_controller.go:91] Sending events to api server.
	I0526 21:25:17.450601  527485 command_runner.go:124] ! I0526 21:23:52.210125       1 range_allocator.go:82] Sending events to api server.
	I0526 21:25:17.450621  527485 command_runner.go:124] ! I0526 21:23:52.211056       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
	I0526 21:25:17.450636  527485 command_runner.go:124] ! I0526 21:23:52.211333       1 controllermanager.go:554] Started "nodeipam"
	I0526 21:25:17.450657  527485 command_runner.go:124] ! W0526 21:23:52.211708       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	I0526 21:25:17.450671  527485 command_runner.go:124] ! W0526 21:23:52.212021       1 controllermanager.go:546] Skipping "route"
	I0526 21:25:17.450686  527485 command_runner.go:124] ! I0526 21:23:52.212292       1 node_ipam_controller.go:159] Starting ipam controller
	I0526 21:25:17.450701  527485 command_runner.go:124] ! I0526 21:23:52.212876       1 shared_informer.go:240] Waiting for caches to sync for node
	I0526 21:25:17.450753  527485 command_runner.go:124] ! I0526 21:23:52.227871       1 node_lifecycle_controller.go:77] Sending events to api server
	I0526 21:25:17.450810  527485 command_runner.go:124] ! E0526 21:23:52.227991       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
	I0526 21:25:17.450823  527485 command_runner.go:124] ! W0526 21:23:52.228003       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
	I0526 21:25:17.450839  527485 command_runner.go:124] ! I0526 21:23:52.257128       1 controllermanager.go:554] Started "persistentvolume-expander"
	I0526 21:25:17.450858  527485 command_runner.go:124] ! I0526 21:23:52.257967       1 expand_controller.go:310] Starting expand controller
	I0526 21:25:17.450874  527485 command_runner.go:124] ! I0526 21:23:52.258344       1 shared_informer.go:240] Waiting for caches to sync for expand
	I0526 21:25:17.450890  527485 command_runner.go:124] ! I0526 21:23:52.287731       1 controllermanager.go:554] Started "endpointslice"
	I0526 21:25:17.450907  527485 command_runner.go:124] ! I0526 21:23:52.287941       1 endpointslice_controller.go:237] Starting endpoint slice controller
	I0526 21:25:17.450923  527485 command_runner.go:124] ! I0526 21:23:52.287950       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0526 21:25:17.450936  527485 command_runner.go:124] ! I0526 21:23:52.334629       1 controllermanager.go:554] Started "daemonset"
	I0526 21:25:17.450952  527485 command_runner.go:124] ! I0526 21:23:52.334789       1 daemon_controller.go:285] Starting daemon sets controller
	I0526 21:25:17.450967  527485 command_runner.go:124] ! I0526 21:23:52.334797       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	I0526 21:25:17.450980  527485 command_runner.go:124] ! I0526 21:23:52.366633       1 controllermanager.go:554] Started "statefulset"
	I0526 21:25:17.450997  527485 command_runner.go:124] ! I0526 21:23:52.366920       1 stateful_set.go:146] Starting stateful set controller
	I0526 21:25:17.451014  527485 command_runner.go:124] ! I0526 21:23:52.367009       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0526 21:25:17.451029  527485 command_runner.go:124] ! I0526 21:23:52.395670       1 controllermanager.go:554] Started "cronjob"
	I0526 21:25:17.451044  527485 command_runner.go:124] ! I0526 21:23:52.395842       1 cronjob_controller.go:96] Starting CronJob Manager
	I0526 21:25:17.451061  527485 command_runner.go:124] ! I0526 21:23:52.416080       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0526 21:25:17.451078  527485 command_runner.go:124] ! I0526 21:23:52.416256       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0526 21:25:17.451098  527485 command_runner.go:124] ! I0526 21:23:52.416385       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451120  527485 command_runner.go:124] ! I0526 21:23:52.416862       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0526 21:25:17.451140  527485 command_runner.go:124] ! I0526 21:23:52.416958       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0526 21:25:17.451160  527485 command_runner.go:124] ! I0526 21:23:52.416975       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451178  527485 command_runner.go:124] ! I0526 21:23:52.417715       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0526 21:25:17.451196  527485 command_runner.go:124] ! I0526 21:23:52.417882       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0526 21:25:17.451215  527485 command_runner.go:124] ! I0526 21:23:52.418025       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451229  527485 command_runner.go:124] ! I0526 21:23:52.418373       1 controllermanager.go:554] Started "csrsigning"
	I0526 21:25:17.451243  527485 command_runner.go:124] ! I0526 21:23:52.418419       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0526 21:25:17.451268  527485 command_runner.go:124] ! I0526 21:23:52.418799       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0526 21:25:17.451286  527485 command_runner.go:124] ! I0526 21:23:52.418805       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0526 21:25:17.451302  527485 command_runner.go:124] ! I0526 21:23:52.515732       1 controllermanager.go:554] Started "bootstrapsigner"
	I0526 21:25:17.451318  527485 command_runner.go:124] ! I0526 21:23:52.516431       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0526 21:25:17.451333  527485 command_runner.go:124] ! I0526 21:23:52.765741       1 controllermanager.go:554] Started "replicationcontroller"
	I0526 21:25:17.451348  527485 command_runner.go:124] ! I0526 21:23:52.765769       1 replica_set.go:182] Starting replicationcontroller controller
	I0526 21:25:17.451364  527485 command_runner.go:124] ! I0526 21:23:52.765867       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I0526 21:25:17.451381  527485 command_runner.go:124] ! I0526 21:23:52.915756       1 node_lifecycle_controller.go:380] Sending events to api server.
	I0526 21:25:17.451395  527485 command_runner.go:124] ! I0526 21:23:52.916150       1 taint_manager.go:163] Sending events to api server.
	I0526 21:25:17.451410  527485 command_runner.go:124] ! I0526 21:23:52.916342       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
	I0526 21:25:17.451419  527485 command_runner.go:124] ! I0526 21:23:52.916386       1 controllermanager.go:554] Started "nodelifecycle"
	I0526 21:25:17.451427  527485 command_runner.go:124] ! I0526 21:23:52.916749       1 node_lifecycle_controller.go:542] Starting node controller
	I0526 21:25:17.451441  527485 command_runner.go:124] ! I0526 21:23:52.916921       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0526 21:25:17.451451  527485 command_runner.go:124] ! I0526 21:23:53.165965       1 controllermanager.go:554] Started "job"
	I0526 21:25:17.451459  527485 command_runner.go:124] ! I0526 21:23:53.166025       1 job_controller.go:148] Starting job controller
	I0526 21:25:17.451468  527485 command_runner.go:124] ! I0526 21:23:53.167211       1 shared_informer.go:240] Waiting for caches to sync for job
	I0526 21:25:17.451479  527485 command_runner.go:124] ! I0526 21:23:53.170385       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0526 21:25:17.451498  527485 command_runner.go:124] ! W0526 21:23:53.178965       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955" does not exist
	I0526 21:25:17.451509  527485 command_runner.go:124] ! I0526 21:23:53.213010       1 shared_informer.go:247] Caches are synced for node 
	I0526 21:25:17.451517  527485 command_runner.go:124] ! I0526 21:23:53.213735       1 range_allocator.go:172] Starting range CIDR allocator
	I0526 21:25:17.451527  527485 command_runner.go:124] ! I0526 21:23:53.214071       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0526 21:25:17.451537  527485 command_runner.go:124] ! I0526 21:23:53.214233       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0526 21:25:17.451549  527485 command_runner.go:124] ! I0526 21:23:53.215982       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0526 21:25:17.451561  527485 command_runner.go:124] ! I0526 21:23:53.216587       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0526 21:25:17.451571  527485 command_runner.go:124] ! I0526 21:23:53.217085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0526 21:25:17.451604  527485 command_runner.go:124] ! I0526 21:23:53.217522       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0526 21:25:17.451616  527485 command_runner.go:124] ! I0526 21:23:53.218215       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0526 21:25:17.451625  527485 command_runner.go:124] ! I0526 21:23:53.218891       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0526 21:25:17.451634  527485 command_runner.go:124] ! I0526 21:23:53.229560       1 shared_informer.go:247] Caches are synced for namespace 
	I0526 21:25:17.451644  527485 command_runner.go:124] ! I0526 21:23:53.235029       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0526 21:25:17.451653  527485 command_runner.go:124] ! I0526 21:23:53.238654       1 shared_informer.go:247] Caches are synced for service account 
	I0526 21:25:17.451671  527485 command_runner.go:124] ! I0526 21:23:53.240824       1 shared_informer.go:247] Caches are synced for endpoint 
	I0526 21:25:17.451686  527485 command_runner.go:124] ! I0526 21:23:53.247379       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0526 21:25:17.451702  527485 command_runner.go:124] ! I0526 21:23:53.251558       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0526 21:25:17.451717  527485 command_runner.go:124] ! I0526 21:23:53.252699       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0526 21:25:17.451732  527485 command_runner.go:124] ! I0526 21:23:53.256544       1 shared_informer.go:247] Caches are synced for TTL 
	I0526 21:25:17.451750  527485 command_runner.go:124] ! I0526 21:23:53.265652       1 range_allocator.go:373] Set node multinode-20210526212238-510955 PodCIDR to [10.244.0.0/24]
	I0526 21:25:17.451765  527485 command_runner.go:124] ! I0526 21:23:53.268627       1 shared_informer.go:247] Caches are synced for job 
	I0526 21:25:17.451780  527485 command_runner.go:124] ! I0526 21:23:53.268752       1 shared_informer.go:247] Caches are synced for stateful set 
	I0526 21:25:17.451795  527485 command_runner.go:124] ! I0526 21:23:53.290037       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0526 21:25:17.451811  527485 command_runner.go:124] ! I0526 21:23:53.290226       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0526 21:25:17.451826  527485 command_runner.go:124] ! I0526 21:23:53.292847       1 shared_informer.go:247] Caches are synced for deployment 
	I0526 21:25:17.451839  527485 command_runner.go:124] ! I0526 21:23:53.293728       1 shared_informer.go:247] Caches are synced for GC 
	I0526 21:25:17.451854  527485 command_runner.go:124] ! I0526 21:23:53.293879       1 shared_informer.go:247] Caches are synced for HPA 
	I0526 21:25:17.451871  527485 command_runner.go:124] ! I0526 21:23:53.293974       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0526 21:25:17.451886  527485 command_runner.go:124] ! I0526 21:23:53.317816       1 shared_informer.go:247] Caches are synced for taint 
	I0526 21:25:17.451900  527485 command_runner.go:124] ! I0526 21:23:53.317927       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I0526 21:25:17.451913  527485 command_runner.go:124] ! W0526 21:23:53.318278       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955. Assuming now as a timestamp.
	I0526 21:25:17.451926  527485 command_runner.go:124] ! I0526 21:23:53.318396       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0526 21:25:17.451934  527485 command_runner.go:124] ! I0526 21:23:53.318775       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0526 21:25:17.451954  527485 command_runner.go:124] ! I0526 21:23:53.319750       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955 event: Registered Node multinode-20210526212238-510955 in Controller"
	I0526 21:25:17.451968  527485 command_runner.go:124] ! I0526 21:23:53.337883       1 shared_informer.go:247] Caches are synced for disruption 
	I0526 21:25:17.451978  527485 command_runner.go:124] ! I0526 21:23:53.337896       1 disruption.go:339] Sending events to api server.
	I0526 21:25:17.451986  527485 command_runner.go:124] ! I0526 21:23:53.368948       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0526 21:25:17.452001  527485 command_runner.go:124] ! I0526 21:23:53.431193       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2wgbs"
	I0526 21:25:17.452018  527485 command_runner.go:124] ! I0526 21:23:53.431223       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0526 21:25:17.452029  527485 command_runner.go:124] ! I0526 21:23:53.459736       1 shared_informer.go:247] Caches are synced for expand 
	I0526 21:25:17.452037  527485 command_runner.go:124] ! I0526 21:23:53.479631       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:17.452047  527485 command_runner.go:124] ! I0526 21:23:53.487838       1 shared_informer.go:247] Caches are synced for PV protection 
	I0526 21:25:17.452056  527485 command_runner.go:124] ! I0526 21:23:53.489356       1 shared_informer.go:247] Caches are synced for attach detach 
	I0526 21:25:17.452063  527485 command_runner.go:124] ! I0526 21:23:53.494672       1 shared_informer.go:247] Caches are synced for resource quota 
	I0526 21:25:17.452073  527485 command_runner.go:124] ! I0526 21:23:53.539359       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0526 21:25:17.452087  527485 command_runner.go:124] ! I0526 21:23:53.545401       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbl42"
	I0526 21:25:17.452103  527485 command_runner.go:124] ! I0526 21:23:53.545422       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:17.452119  527485 command_runner.go:124] ! I0526 21:23:53.556102       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-multinode-20210526212238-510955" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:25:17.452146  527485 command_runner.go:124] ! I0526 21:23:53.567036       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tw67b"
	I0526 21:25:17.452166  527485 command_runner.go:124] ! E0526 21:23:53.635384       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0526 21:25:17.452179  527485 command_runner.go:124] ! I0526 21:23:53.689947       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0526 21:25:17.452195  527485 command_runner.go:124] ! I0526 21:23:53.733785       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0526 21:25:17.452211  527485 command_runner.go:124] ! I0526 21:23:53.758013       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-z56bv"
	I0526 21:25:17.452224  527485 command_runner.go:124] ! I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:17.452233  527485 command_runner.go:124] ! I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:25:17.452244  527485 command_runner.go:124] ! I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:25:17.452256  527485 command_runner.go:124] ! I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
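The controller-manager output above was gathered with the "sudo /bin/crictl logs --tail 400 <container-id>" command shown at the start of this block. A minimal local sketch of the same invocation; the helper name is illustrative, it assumes crictl is installed, and minikube itself runs the command on the VM over SSH rather than locally:

package main

import (
	"fmt"
	"os/exec"
)

// containerLogs shells out to crictl the same way the logged command does:
// sudo /bin/crictl logs --tail <n> <container-id>
func containerLogs(containerID string, tail int) (string, error) {
	out, err := exec.Command("sudo", "/bin/crictl", "logs", "--tail", fmt.Sprint(tail), containerID).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := containerLogs("2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(logs)
}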
	I0526 21:25:19.962314  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:25:19.962338  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.962343  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.962347  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:19.966519  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:25:19.966545  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:19.966552  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:19.966558  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:19.966564  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:19.966569  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:19.966574  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:19 GMT
	I0526 21:25:19.968943  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 52658 chars]
	I0526 21:25:19.970240  527485 system_pods.go:59] 8 kube-system pods found
	I0526 21:25:19.970288  527485 system_pods.go:61] "coredns-74ff55c5b-tw67b" [a0522c32-9960-4c21-8a5a-d0b137009166] Running
	I0526 21:25:19.970303  527485 system_pods.go:61] "etcd-multinode-20210526212238-510955" [6e073b61-d86c-4e7a-a1ad-aa5daefd710b] Running
	I0526 21:25:19.970308  527485 system_pods.go:61] "kindnet-2wgbs" [aac3ff91-8f9c-4f4e-81fc-a859f780d67d] Running
	I0526 21:25:19.970312  527485 system_pods.go:61] "kube-apiserver-multinode-20210526212238-510955" [5d446255-3487-4319-9b9f-2294a93fd226] Running
	I0526 21:25:19.970316  527485 system_pods.go:61] "kube-controller-manager-multinode-20210526212238-510955" [ff663293-6f11-48e7-9409-1637114dc587] Running
	I0526 21:25:19.970321  527485 system_pods.go:61] "kube-proxy-qbl42" [950a915d-c5f0-4e6f-bc12-ee97013032f0] Running
	I0526 21:25:19.970325  527485 system_pods.go:61] "kube-scheduler-multinode-20210526212238-510955" [66bb91fe-7af2-400f-a477-fe2dc3428e83] Running
	I0526 21:25:19.970330  527485 system_pods.go:61] "storage-provisioner" [e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36] Running
	I0526 21:25:19.970335  527485 system_pods.go:74] duration metric: took 3.182240535s to wait for pod list to return data ...
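The system_pods wait above is essentially a list of the kube-system pods followed by a check that each reported pod is Running. A minimal client-go sketch of that check as a stand-alone program; it is illustrative only and not minikube's system_pods.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%q [%s] %s (running=%v)\n", p.Name, p.UID, p.Status.Phase, running)
	}
}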
	I0526 21:25:19.970345  527485 default_sa.go:34] waiting for default service account to be created ...
	I0526 21:25:19.970396  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/default/serviceaccounts
	I0526 21:25:19.970404  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.970408  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.970412  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:19.973097  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:25:19.973117  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:19.973124  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:19.973129  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:19.973134  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:19.973140  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:19.973145  527485 round_trippers.go:454]     Content-Length: 304
	I0526 21:25:19.973158  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:19 GMT
	I0526 21:25:19.973400  527485 request.go:1107] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7ed7b6cf-a0e1-4add-9aa6-5087c856497d","resourceVersion":"434","creationTimestamp":"2021-05-26T21:23:53Z"},"secrets":[{"name":"default-token-cdspv"}]}]}
	I0526 21:25:19.974116  527485 default_sa.go:45] found service account: "default"
	I0526 21:25:19.974136  527485 default_sa.go:55] duration metric: took 3.786239ms for default service account to be created ...
	I0526 21:25:19.974143  527485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0526 21:25:19.974182  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:25:19.974190  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.974194  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.974198  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:19.981719  527485 round_trippers.go:448] Response Status: 200 OK in 7 milliseconds
	I0526 21:25:19.981737  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:19.981743  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:19.981748  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:19.981754  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:19 GMT
	I0526 21:25:19.981759  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:19.981776  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:19.982672  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 52658 chars]
	I0526 21:25:19.983890  527485 system_pods.go:86] 8 kube-system pods found
	I0526 21:25:19.983909  527485 system_pods.go:89] "coredns-74ff55c5b-tw67b" [a0522c32-9960-4c21-8a5a-d0b137009166] Running
	I0526 21:25:19.983916  527485 system_pods.go:89] "etcd-multinode-20210526212238-510955" [6e073b61-d86c-4e7a-a1ad-aa5daefd710b] Running
	I0526 21:25:19.983925  527485 system_pods.go:89] "kindnet-2wgbs" [aac3ff91-8f9c-4f4e-81fc-a859f780d67d] Running
	I0526 21:25:19.983935  527485 system_pods.go:89] "kube-apiserver-multinode-20210526212238-510955" [5d446255-3487-4319-9b9f-2294a93fd226] Running
	I0526 21:25:19.983945  527485 system_pods.go:89] "kube-controller-manager-multinode-20210526212238-510955" [ff663293-6f11-48e7-9409-1637114dc587] Running
	I0526 21:25:19.983953  527485 system_pods.go:89] "kube-proxy-qbl42" [950a915d-c5f0-4e6f-bc12-ee97013032f0] Running
	I0526 21:25:19.983960  527485 system_pods.go:89] "kube-scheduler-multinode-20210526212238-510955" [66bb91fe-7af2-400f-a477-fe2dc3428e83] Running
	I0526 21:25:19.983964  527485 system_pods.go:89] "storage-provisioner" [e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36] Running
	I0526 21:25:19.983969  527485 system_pods.go:126] duration metric: took 9.821847ms to wait for k8s-apps to be running ...
	I0526 21:25:19.983979  527485 system_svc.go:44] waiting for kubelet service to be running ....
	I0526 21:25:19.984027  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:25:19.994398  527485 system_svc.go:56] duration metric: took 10.413838ms WaitForService to wait for kubelet.
	I0526 21:25:19.994415  527485 kubeadm.go:547] duration metric: took 1m25.186288945s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
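The kubelet wait above reduces to the exit status of a systemctl is-active probe, run on the VM over SSH. A minimal local sketch of the same probe using the plain single-unit form of the command (the SSH hop that minikube performs is omitted, so this is illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; a zero exit status means the kubelet unit is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}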
	I0526 21:25:19.994431  527485 node_conditions.go:102] verifying NodePressure condition ...
	I0526 21:25:19.994489  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes
	I0526 21:25:19.994498  527485 round_trippers.go:429] Request Headers:
	I0526 21:25:19.994502  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:25:19.994506  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:25:20.000092  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:25:20.000105  527485 round_trippers.go:451] Response Headers:
	I0526 21:25:20.000110  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:25:20.000114  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:25:20.000117  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:25:20.000121  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:25:20.000125  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:25:20 GMT
	I0526 21:25:20.000269  527485 request.go:1107] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T2 [truncated 6155 chars]
	I0526 21:25:20.001209  527485 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0526 21:25:20.001246  527485 node_conditions.go:123] node cpu capacity is 2
	I0526 21:25:20.001261  527485 node_conditions.go:105] duration metric: took 6.822942ms to run NodePressure ...
	I0526 21:25:20.001275  527485 start.go:214] waiting for startup goroutines ...
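The NodePressure step above lists the nodes and reads their capacity (the "17784752Ki" ephemeral storage and "2" CPUs reported a few lines up). A minimal client-go sketch of reading those fields; illustrative only, not minikube's node_conditions.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}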
	I0526 21:25:20.003439  527485 out.go:170] 
	I0526 21:25:20.003724  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:25:20.005541  527485 out.go:170] * Starting node multinode-20210526212238-510955-m02 in cluster multinode-20210526212238-510955
	I0526 21:25:20.005562  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:25:20.005598  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:25:20.005611  527485 cache.go:54] Caching tarball of preloaded images
	I0526 21:25:20.005736  527485 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 21:25:20.005755  527485 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
	I0526 21:25:20.005852  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:25:20.006024  527485 cache.go:191] Successfully downloaded all kic artifacts
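The preload step above is a simple existence check on the cached tarball before falling back to a download. A minimal sketch of that decision; the helper is illustrative and the path assumes the default ~/.minikube cache location rather than the Jenkins workspace path shown in the log:

package main

import (
	"fmt"
	"os"
)

// hasLocalPreload reports whether the preloaded-images tarball is already cached,
// in which case the download can be skipped.
func hasLocalPreload(path string) bool {
	info, err := os.Stat(path)
	return err == nil && !info.IsDir()
}

func main() {
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4")
	if hasLocalPreload(tarball) {
		fmt.Println("Found local preload, skipping download:", tarball)
	} else {
		fmt.Println("Preload not cached, would download:", tarball)
	}
}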
	I0526 21:25:20.006050  527485 start.go:313] acquiring machines lock for multinode-20210526212238-510955-m02: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:25:20.006128  527485 start.go:317] acquired machines lock for "multinode-20210526212238-510955-m02" in 61.64µs
	I0526 21:25:20.006171  527485 start.go:89] Provisioning new machine with config: &{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name:m02 IP: Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
	I0526 21:25:20.006258  527485 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0526 21:25:20.008161  527485 out.go:197] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0526 21:25:20.008265  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:20.008309  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:20.019614  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0526 21:25:20.020112  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:20.020604  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:20.020628  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:20.020998  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:20.021172  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:20.021309  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:20.021426  527485 start.go:160] libmachine.API.Create for "multinode-20210526212238-510955" (driver="kvm2")
	I0526 21:25:20.021451  527485 client.go:168] LocalClient.Create starting
	I0526 21:25:20.021484  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem
	I0526 21:25:20.021523  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:25:20.021551  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:25:20.021666  527485 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem
	I0526 21:25:20.021690  527485 main.go:128] libmachine: Decoding PEM data...
	I0526 21:25:20.021706  527485 main.go:128] libmachine: Parsing certificate...
	I0526 21:25:20.021765  527485 main.go:128] libmachine: Running pre-create checks...
	I0526 21:25:20.021778  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .PreCreateCheck
	I0526 21:25:20.021910  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetConfigRaw
	I0526 21:25:20.022245  527485 main.go:128] libmachine: Creating machine...
	I0526 21:25:20.022263  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .Create
	I0526 21:25:20.022370  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Creating KVM machine...
	I0526 21:25:20.024804  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found existing default KVM network
	I0526 21:25:20.024985  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found existing private KVM network mk-multinode-20210526212238-510955
	I0526 21:25:20.025117  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02 ...
	I0526 21:25:20.025143  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso
	I0526 21:25:20.025208  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.025100  527782 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:25:20.025261  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso...
	I0526 21:25:20.210752  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.210598  527782 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa...
	I0526 21:25:20.455411  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.455294  527782 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/multinode-20210526212238-510955-m02.rawdisk...
	I0526 21:25:20.455451  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Writing magic tar header
	I0526 21:25:20.455472  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Writing SSH key tar header
	I0526 21:25:20.455493  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.455432  527782 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02 ...
	I0526 21:25:20.455629  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02
	I0526 21:25:20.455667  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines
	I0526 21:25:20.455690  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02 (perms=drwx------)
	I0526 21:25:20.455710  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:25:20.455734  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1
	I0526 21:25:20.455749  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0526 21:25:20.455768  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines (perms=drwxr-xr-x)
	I0526 21:25:20.455808  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube (perms=drwxr-xr-x)
	I0526 21:25:20.455828  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1 (perms=drwxr-xr-x)
	I0526 21:25:20.455839  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home/jenkins
	I0526 21:25:20.455858  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Checking permissions on dir: /home
	I0526 21:25:20.455867  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Skipping /home - not owner
	I0526 21:25:20.455880  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0526 21:25:20.455895  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0526 21:25:20.455904  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Creating domain...
	I0526 21:25:20.482460  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:97:3e:6b in network default
	I0526 21:25:20.482620  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Ensuring networks are active...
	I0526 21:25:20.482652  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:20.484777  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Ensuring network default is active
	I0526 21:25:20.485081  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Ensuring network mk-multinode-20210526212238-510955 is active
	I0526 21:25:20.485392  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Getting domain xml...
	I0526 21:25:20.487191  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Creating domain...
	I0526 21:25:20.846400  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Waiting to get IP...
	I0526 21:25:20.847229  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:20.847637  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:20.847666  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:20.847604  527782 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0526 21:25:21.111830  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.112397  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.112428  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:21.112336  527782 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0526 21:25:21.494744  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.495238  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.495268  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:21.495174  527782 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0526 21:25:21.919597  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.919976  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:21.920010  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:21.919924  527782 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0526 21:25:22.394448  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.394860  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.394893  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:22.394806  527782 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0526 21:25:22.983159  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.983516  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:22.983545  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:22.983483  527782 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0526 21:25:23.819521  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:23.819856  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:23.819880  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:23.819817  527782 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0526 21:25:24.567585  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:24.567942  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:24.567967  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:24.567900  527782 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0526 21:25:25.557029  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:25.557334  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:25.557362  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:25.557289  527782 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0526 21:25:26.748560  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:26.748940  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:26.748974  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:26.748888  527782 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0526 21:25:28.428556  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:28.428929  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:28.428959  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:28.428848  527782 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0526 21:25:30.776577  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:30.777005  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:30.777037  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:30.776944  527782 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0526 21:25:34.145475  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:34.145873  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find current IP address of domain multinode-20210526212238-510955-m02 in network mk-multinode-20210526212238-510955
	I0526 21:25:34.145899  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | I0526 21:25:34.145842  527782 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0526 21:25:37.267146  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.267618  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Found IP for machine: 192.168.39.87
	I0526 21:25:37.267643  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Reserving static IP address...
	I0526 21:25:37.267663  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has current primary IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.267985  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210526212238-510955-m02", mac: "52:54:00:9f:f1:a0", ip: "192.168.39.87"} in network mk-multinode-20210526212238-510955
	I0526 21:25:37.318746  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Getting to WaitForSSH function...
	I0526 21:25:37.318801  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Reserved static IP address: 192.168.39.87
	I0526 21:25:37.318818  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Waiting for SSH to be available...
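The retry.go lines above poll the libvirt network's DHCP leases with a growing delay until the new domain reports an address. A hedged Go sketch of that wait loop is shown here; lookupIP is a placeholder standing in for the real lease query, and the backoff growth is only an approximation of the cadence seen in the log.

// Sketch of the "will retry after ..." wait-for-IP loop logged above.
package waitip

import (
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		delay += delay / 2 // grow the delay between attempts
	}
	return "", fmt.Errorf("no IP after %s", timeout)
}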
	I0526 21:25:37.324082  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.324481  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.324518  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.324641  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Using SSH client type: external
	I0526 21:25:37.324674  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa (-rw-------)
	I0526 21:25:37.324716  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:25:37.324732  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | About to run SSH command:
	I0526 21:25:37.324745  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | exit 0
	I0526 21:25:37.460600  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | SSH cmd err, output: <nil>: 
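WaitForSSH above shells out to the external ssh client and simply runs "exit 0" until it succeeds. A minimal sketch of that probe using os/exec follows; the option list is abbreviated from the command line logged above and the function name is illustrative.

// Sketch: probe SSH readiness by running `exit 0` through the external
// ssh client, as in the "About to run SSH command: exit 0" lines above.
package sshprobe

import "os/exec"

func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		user+"@"+host,
		"exit 0")
	return cmd.Run() == nil
}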
	I0526 21:25:37.461021  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) KVM machine creation complete!
	I0526 21:25:37.461113  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetConfigRaw
	I0526 21:25:37.461703  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:37.461920  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:37.462073  527485 main.go:128] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0526 21:25:37.462096  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetState
	I0526 21:25:37.464483  527485 main.go:128] libmachine: Detecting operating system of created instance...
	I0526 21:25:37.464498  527485 main.go:128] libmachine: Waiting for SSH to be available...
	I0526 21:25:37.464505  527485 main.go:128] libmachine: Getting to WaitForSSH function...
	I0526 21:25:37.464512  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.468822  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.469158  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.469191  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.469270  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.469439  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.469592  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.469696  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.469829  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.470066  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.470086  527485 main.go:128] libmachine: About to run SSH command:
	exit 0
	I0526 21:25:37.591991  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:25:37.592015  527485 main.go:128] libmachine: Detecting the provisioner...
	I0526 21:25:37.592026  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.596752  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.597079  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.597107  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.597212  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.597391  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.597550  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.597690  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.597835  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.597964  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.597976  527485 main.go:128] libmachine: About to run SSH command:
	cat /etc/os-release
	I0526 21:25:37.722027  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0526 21:25:37.722148  527485 main.go:128] libmachine: found compatible host: buildroot
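Provisioner detection above comes down to running cat /etc/os-release over SSH and matching the ID field ("found compatible host: buildroot"). A small sketch of that parse, with a plain string argument standing in for the SSH runner's output:

// Sketch: extract the ID field from /etc/os-release output as printed above.
package osrelease

import (
	"bufio"
	"strings"
)

func parseID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}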
	I0526 21:25:37.722162  527485 main.go:128] libmachine: Provisioning with buildroot...
	I0526 21:25:37.722176  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:37.722435  527485 buildroot.go:166] provisioning hostname "multinode-20210526212238-510955-m02"
	I0526 21:25:37.722468  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:37.722618  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.728325  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.728682  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.728708  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.728918  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.729093  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.729268  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.729423  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.729569  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.729707  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.729738  527485 main.go:128] libmachine: About to run SSH command:
	sudo hostname multinode-20210526212238-510955-m02 && echo "multinode-20210526212238-510955-m02" | sudo tee /etc/hostname
	I0526 21:25:37.861077  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: multinode-20210526212238-510955-m02
	
	I0526 21:25:37.861117  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:37.866154  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.866468  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:37.866503  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.866603  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:37.866801  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.866961  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:37.867121  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:37.867324  527485 main.go:128] libmachine: Using SSH client type: native
	I0526 21:25:37.867465  527485 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0526 21:25:37.867488  527485 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210526212238-510955-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210526212238-510955-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210526212238-510955-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:25:37.994577  527485 main.go:128] libmachine: SSH cmd err, output: <nil>: 
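The shell block above makes sure /etc/hosts resolves the new hostname via a 127.0.1.1 entry. A sketch of building that same command string in Go, with the shell logic copied from the logged command and the helper name purely illustrative:

// Sketch: render the /etc/hosts fix-up command shown above for a hostname.
package hostsfix

import "fmt"

func hostsCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, name, name, name)
}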
	I0526 21:25:37.994605  527485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:25:37.994632  527485 buildroot.go:174] setting up certificates
	I0526 21:25:37.994644  527485 provision.go:83] configureAuth start
	I0526 21:25:37.994657  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetMachineName
	I0526 21:25:37.994877  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:37.999693  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:37.999978  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.000004  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.000166  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.004413  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.004690  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.004721  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.004820  527485 provision.go:137] copyHostCerts
	I0526 21:25:38.004856  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:25:38.004949  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:25:38.004962  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:25:38.005019  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:25:38.005112  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:25:38.005137  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:25:38.005142  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:25:38.005165  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:25:38.005210  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:25:38.005231  527485 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:25:38.005239  527485 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:25:38.005262  527485 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:25:38.005306  527485 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.multinode-20210526212238-510955-m02 san=[192.168.39.87 192.168.39.87 localhost 127.0.0.1 minikube multinode-20210526212238-510955-m02]
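The provision.go line above issues a server certificate signed by the profile's CA with the SAN list shown (node IPs, localhost, 127.0.0.1, hostnames). A hedged Go sketch of that step using crypto/x509 follows; the subject, validity period, and file modes are assumptions, and error handling is trimmed to keep the sketch short.

// Sketch only: issue a CA-signed server certificate with the given SANs.
package servercert

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func writeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, certPath, keyPath string, ips []net.IP, dnsNames []string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-server"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	if err := os.WriteFile(certPath, pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}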
	I0526 21:25:38.122346  527485 provision.go:171] copyRemoteCerts
	I0526 21:25:38.122396  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:25:38.122419  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.126872  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.127179  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.127205  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.127381  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.127545  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.127680  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.127805  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:38.216400  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0526 21:25:38.216440  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:25:38.233050  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0526 21:25:38.233087  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0526 21:25:38.249070  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0526 21:25:38.249106  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0526 21:25:38.265161  527485 provision.go:86] duration metric: configureAuth took 270.50695ms
	I0526 21:25:38.265183  527485 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:25:38.265377  527485 main.go:128] libmachine: Checking connection to Docker...
	I0526 21:25:38.265397  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetURL
	I0526 21:25:38.267393  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Using libvirt version 3000000
	I0526 21:25:38.271542  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.271872  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.271897  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.272016  527485 main.go:128] libmachine: Docker is up and running!
	I0526 21:25:38.272031  527485 main.go:128] libmachine: Reticulating splines...
	I0526 21:25:38.272037  527485 client.go:171] LocalClient.Create took 18.250578511s
	I0526 21:25:38.272055  527485 start.go:168] duration metric: libmachine.API.Create for "multinode-20210526212238-510955" took 18.250628879s
	I0526 21:25:38.272067  527485 start.go:267] post-start starting for "multinode-20210526212238-510955-m02" (driver="kvm2")
	I0526 21:25:38.272074  527485 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:25:38.272089  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.272263  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:25:38.272282  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.277376  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.277710  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.277742  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.277853  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.278052  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.278206  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.278364  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:38.365155  527485 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:25:38.369609  527485 command_runner.go:124] > NAME=Buildroot
	I0526 21:25:38.369632  527485 command_runner.go:124] > VERSION=2020.02.12
	I0526 21:25:38.369642  527485 command_runner.go:124] > ID=buildroot
	I0526 21:25:38.369651  527485 command_runner.go:124] > VERSION_ID=2020.02.12
	I0526 21:25:38.369659  527485 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0526 21:25:38.369698  527485 info.go:137] Remote host: Buildroot 2020.02.12
	I0526 21:25:38.369715  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:25:38.369772  527485 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
	I0526 21:25:38.369890  527485 start.go:270] post-start completed in 97.814032ms
	I0526 21:25:38.369923  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetConfigRaw
	I0526 21:25:38.370477  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:38.375243  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.375578  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.375610  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.375807  527485 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/config.json ...
	I0526 21:25:38.375958  527485 start.go:129] duration metric: createHost completed in 18.369690652s
	I0526 21:25:38.375973  527485 start.go:80] releasing machines lock for "multinode-20210526212238-510955-m02", held for 18.369819153s
	I0526 21:25:38.376005  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.376167  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:38.380405  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.380712  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.380745  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.383172  527485 out.go:170] * Found network options:
	I0526 21:25:38.384873  527485 out.go:170]   - NO_PROXY=192.168.39.229
	W0526 21:25:38.384918  527485 proxy.go:118] fail to check proxy env: Error ip not in block
	I0526 21:25:38.384943  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.385107  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:25:38.385552  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	W0526 21:25:38.385726  527485 proxy.go:118] fail to check proxy env: Error ip not in block
	I0526 21:25:38.385811  527485 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 21:25:38.385838  527485 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:25:38.385885  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.385836  527485 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 21:25:38.385989  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:25:38.386012  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:25:38.390617  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.390995  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.391027  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.391147  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.391291  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.391468  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.391600  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:38.391767  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.392091  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:38.392125  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:38.392248  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:25:38.392383  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:25:38.392515  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:25:38.392646  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:25:42.476566  527485 command_runner.go:124] > {
	I0526 21:25:42.476594  527485 command_runner.go:124] >   "images": [
	I0526 21:25:42.476599  527485 command_runner.go:124] >   ]
	I0526 21:25:42.476602  527485 command_runner.go:124] > }
	I0526 21:25:42.477668  527485 command_runner.go:124] ! time="2021-05-26T21:25:38Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0526 21:25:42.477689  527485 command_runner.go:124] ! time="2021-05-26T21:25:40Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:25:42.477710  527485 command_runner.go:124] ! time="2021-05-26T21:25:42Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0526 21:25:42.477730  527485 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.09172304s)
	I0526 21:25:42.477759  527485 containerd.go:566] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.2". assuming images are not preloaded.
	I0526 21:25:42.477765  527485 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0526 21:25:42.477794  527485 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0526 21:25:42.477806  527485 command_runner.go:124] > <H1>302 Moved</H1>
	I0526 21:25:42.477810  527485 command_runner.go:124] > The document has moved
	I0526 21:25:42.477810  527485 ssh_runner.go:149] Run: which lz4
	I0526 21:25:42.477820  527485 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0526 21:25:42.477831  527485 command_runner.go:124] > </BODY></HTML>
	I0526 21:25:42.477854  527485 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (4.091991492s)
	I0526 21:25:42.482008  527485 command_runner.go:124] > /bin/lz4
	I0526 21:25:42.482258  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0526 21:25:42.482333  527485 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0526 21:25:42.486505  527485 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:25:42.486947  527485 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0526 21:25:42.486974  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (953722271 bytes)
	I0526 21:25:46.520241  527485 containerd.go:503] Took 4.037934 seconds to copy over tarball
	I0526 21:25:46.520321  527485 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0526 21:25:52.928908  527485 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.408556452s)
	I0526 21:25:52.928937  527485 containerd.go:510] Took 6.408662 seconds to extract the tarball
	I0526 21:25:52.928948  527485 ssh_runner.go:100] rm: /preloaded.tar.lz4
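
For reference, the preload step above amounts to copying the tarball over SSH and unpacking it with an lz4-aware tar into /var (the log confirmed /bin/lz4 is present on the guest). A minimal manual sketch using the node IP, user and file names from this log, staging through /tmp rather than minikube's own root-owned /preloaded.tar.lz4 path, and not minikube's actual code path:

	# Copy the preloaded image tarball to the worker and extract it into /var,
	# mirroring the scp + `sudo tar -I lz4 -C /var -xf` calls logged above.
	scp preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 docker@192.168.39.87:/tmp/preloaded.tar.lz4
	ssh docker@192.168.39.87 'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'
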
	I0526 21:25:52.988755  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:25:53.144586  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:25:53.196649  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:25:53.207985  527485 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0526 21:25:53.248870  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:25:53.260809  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:25:53.271305  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:25:53.285912  527485 command_runner.go:124] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0526 21:25:53.285938  527485 command_runner.go:124] > image-endpoint: unix:///run/containerd/containerd.sock
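
The crictl warnings logged earlier at 21:25:38–21:25:42 (probing the deprecated default dockershim/cri-o endpoints) come from running crictl before this /etc/crictl.yaml exists; once the file above is written, crictl talks to containerd's socket directly. A rough way to reproduce the pinning on a node by hand:

	# Pin crictl to the containerd socket (same content as the tee above),
	# then confirm it no longer probes the deprecated default endpoints.
	printf '%s\n' \
	  'runtime-endpoint: unix:///run/containerd/containerd.sock' \
	  'image-endpoint: unix:///run/containerd/containerd.sock' | sudo tee /etc/crictl.yaml
	sudo crictl version
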
	I0526 21:25:53.286122  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3Npe
mUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lLm9wdGlvbnNdCiAgICAgICAgICBOb1Bpdm90Um9vdCA9IHRydWUKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvb
GQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0526 21:25:53.299753  527485 command_runner.go:124] > root = "/var/lib/containerd"
	I0526 21:25:53.299768  527485 command_runner.go:124] > state = "/run/containerd"
	I0526 21:25:53.299774  527485 command_runner.go:124] > oom_score = 0
	I0526 21:25:53.299777  527485 command_runner.go:124] > [grpc]
	I0526 21:25:53.299782  527485 command_runner.go:124] >   address = "/run/containerd/containerd.sock"
	I0526 21:25:53.299788  527485 command_runner.go:124] >   uid = 0
	I0526 21:25:53.299793  527485 command_runner.go:124] >   gid = 0
	I0526 21:25:53.299800  527485 command_runner.go:124] >   max_recv_message_size = 16777216
	I0526 21:25:53.299808  527485 command_runner.go:124] >   max_send_message_size = 16777216
	I0526 21:25:53.299815  527485 command_runner.go:124] > [debug]
	I0526 21:25:53.299821  527485 command_runner.go:124] >   address = ""
	I0526 21:25:53.299834  527485 command_runner.go:124] >   uid = 0
	I0526 21:25:53.299839  527485 command_runner.go:124] >   gid = 0
	I0526 21:25:53.299844  527485 command_runner.go:124] >   level = ""
	I0526 21:25:53.299851  527485 command_runner.go:124] > [metrics]
	I0526 21:25:53.299859  527485 command_runner.go:124] >   address = ""
	I0526 21:25:53.299867  527485 command_runner.go:124] >   grpc_histogram = false
	I0526 21:25:53.299873  527485 command_runner.go:124] > [cgroup]
	I0526 21:25:53.299880  527485 command_runner.go:124] >   path = ""
	I0526 21:25:53.299887  527485 command_runner.go:124] > [plugins]
	I0526 21:25:53.299897  527485 command_runner.go:124] >   [plugins.cgroups]
	I0526 21:25:53.299907  527485 command_runner.go:124] >     no_prometheus = false
	I0526 21:25:53.299912  527485 command_runner.go:124] >   [plugins.cri]
	I0526 21:25:53.299919  527485 command_runner.go:124] >     stream_server_address = ""
	I0526 21:25:53.299930  527485 command_runner.go:124] >     stream_server_port = "10010"
	I0526 21:25:53.299938  527485 command_runner.go:124] >     enable_selinux = false
	I0526 21:25:53.299952  527485 command_runner.go:124] >     sandbox_image = "k8s.gcr.io/pause:3.2"
	I0526 21:25:53.299960  527485 command_runner.go:124] >     stats_collect_period = 10
	I0526 21:25:53.299965  527485 command_runner.go:124] >     systemd_cgroup = false
	I0526 21:25:53.299972  527485 command_runner.go:124] >     enable_tls_streaming = false
	I0526 21:25:53.299977  527485 command_runner.go:124] >     max_container_log_line_size = 16384
	I0526 21:25:53.299982  527485 command_runner.go:124] >     [plugins.cri.containerd]
	I0526 21:25:53.299987  527485 command_runner.go:124] >       snapshotter = "overlayfs"
	I0526 21:25:53.299992  527485 command_runner.go:124] >       [plugins.cri.containerd.default_runtime]
	I0526 21:25:53.299998  527485 command_runner.go:124] >         runtime_type = "io.containerd.runc.v2"
	I0526 21:25:53.300005  527485 command_runner.go:124] >         [plugins.cri.containerd.default_runtime.options]
	I0526 21:25:53.300010  527485 command_runner.go:124] >           NoPivotRoot = true
	I0526 21:25:53.300015  527485 command_runner.go:124] >       [plugins.cri.containerd.untrusted_workload_runtime]
	I0526 21:25:53.300019  527485 command_runner.go:124] >         runtime_type = ""
	I0526 21:25:53.300024  527485 command_runner.go:124] >         runtime_engine = ""
	I0526 21:25:53.300031  527485 command_runner.go:124] >         runtime_root = ""
	I0526 21:25:53.300035  527485 command_runner.go:124] >     [plugins.cri.cni]
	I0526 21:25:53.300040  527485 command_runner.go:124] >       bin_dir = "/opt/cni/bin"
	I0526 21:25:53.300045  527485 command_runner.go:124] >       conf_dir = "/etc/cni/net.mk"
	I0526 21:25:53.300049  527485 command_runner.go:124] >       conf_template = ""
	I0526 21:25:53.300053  527485 command_runner.go:124] >     [plugins.cri.registry]
	I0526 21:25:53.300058  527485 command_runner.go:124] >       [plugins.cri.registry.mirrors]
	I0526 21:25:53.300063  527485 command_runner.go:124] >         [plugins.cri.registry.mirrors."docker.io"]
	I0526 21:25:53.300071  527485 command_runner.go:124] >           endpoint = ["https://registry-1.docker.io"]
	I0526 21:25:53.300075  527485 command_runner.go:124] >         [plugins.diff-service]
	I0526 21:25:53.300081  527485 command_runner.go:124] >     default = ["walking"]
	I0526 21:25:53.300087  527485 command_runner.go:124] >   [plugins.scheduler]
	I0526 21:25:53.300091  527485 command_runner.go:124] >     pause_threshold = 0.02
	I0526 21:25:53.300095  527485 command_runner.go:124] >     deletion_threshold = 0
	I0526 21:25:53.300100  527485 command_runner.go:124] >     mutation_threshold = 100
	I0526 21:25:53.300104  527485 command_runner.go:124] >     schedule_delay = "0s"
	I0526 21:25:53.300110  527485 command_runner.go:124] >     startup_delay = "100ms"
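
The containerd config.toml is shipped to the node as a base64 blob (the long tee command above) and echoed back line by line after decoding. Assuming SSH access with the same key and user as in this log, two quick ways to inspect what containerd actually picked up, not part of minikube itself:

	# What landed on disk:
	ssh docker@192.168.39.87 'sudo cat /etc/containerd/config.toml'
	# What containerd reports as its merged, effective configuration:
	ssh docker@192.168.39.87 'containerd config dump | head -n 40'
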
	I0526 21:25:53.300223  527485 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:25:53.307622  527485 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:25:53.307992  527485 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:25:53.308044  527485 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0526 21:25:53.330620  527485 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
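
The sysctl failure above is expected on a fresh guest: the net.bridge.* keys only exist once br_netfilter is loaded, which is why the modprobe follows it. A sketch of the equivalent manual prerequisite setup (the log only probes the bridge key; setting it to 1 explicitly is added here for completeness):

	# Bridge traffic must traverse iptables and IPv4 forwarding must be on
	# before kube-proxy/CNI can work; mirrors the three Run: lines above.
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables=1
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
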
	I0526 21:25:53.339265  527485 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:25:53.486532  527485 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:25:57.835864  527485 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (4.349289401s)
	I0526 21:25:57.835899  527485 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:25:57.835961  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:25:57.844201  527485 command_runner.go:124] ! stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:25:57.844555  527485 retry.go:31] will retry after 1.440509088s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:25:59.285247  527485 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:25:59.290171  527485 command_runner.go:124] >   File: /run/containerd/containerd.sock
	I0526 21:25:59.290197  527485 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0526 21:25:59.290206  527485 command_runner.go:124] > Device: 14h/20d	Inode: 30867       Links: 1
	I0526 21:25:59.290217  527485 command_runner.go:124] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:25:59.290236  527485 command_runner.go:124] > Access: 2021-05-26 21:25:57.890195203 +0000
	I0526 21:25:59.290244  527485 command_runner.go:124] > Modify: 2021-05-26 21:25:57.890195203 +0000
	I0526 21:25:59.290253  527485 command_runner.go:124] > Change: 2021-05-26 21:25:57.890195203 +0000
	I0526 21:25:59.290259  527485 command_runner.go:124] >  Birth: -
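
The "Will wait 60s for socket path" step above is a retry loop around stat (one retry of ~1.4s was needed here after the containerd restart). A plain-shell equivalent, assuming the same socket path, purely for illustration:

	# Wait up to 60s for containerd's socket to appear after the restart.
	for i in $(seq 1 60); do
	  [ -S /run/containerd/containerd.sock ] && break
	  sleep 1
	done
	stat /run/containerd/containerd.sock
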
	I0526 21:25:59.290557  527485 start.go:401] Will wait 60s for crictl version
	I0526 21:25:59.290617  527485 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:25:59.312397  527485 command_runner.go:124] > Version:  0.1.0
	I0526 21:25:59.312778  527485 command_runner.go:124] > RuntimeName:  containerd
	I0526 21:25:59.312798  527485 command_runner.go:124] > RuntimeVersion:  v1.4.4
	I0526 21:25:59.312806  527485 command_runner.go:124] > RuntimeApiVersion:  v1alpha2
	I0526 21:25:59.313914  527485 start.go:410] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.4
	RuntimeApiVersion:  v1alpha2
	I0526 21:25:59.313963  527485 ssh_runner.go:149] Run: containerd --version
	I0526 21:25:59.351203  527485 command_runner.go:124] > containerd github.com/containerd/containerd v1.4.4 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
	I0526 21:25:59.353503  527485 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	I0526 21:25:59.355074  527485 out.go:170]   - env NO_PROXY=192.168.39.229
	I0526 21:25:59.355139  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetIP
	I0526 21:25:59.360335  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.360736  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:25:59.360779  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.360938  527485 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0526 21:25:59.364826  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:25:59.375106  527485 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955 for IP: 192.168.39.87
	I0526 21:25:59.375153  527485 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key
	I0526 21:25:59.375169  527485 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key
	I0526 21:25:59.375182  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0526 21:25:59.375194  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0526 21:25:59.375205  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0526 21:25:59.375216  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0526 21:25:59.375268  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem (1338 bytes)
	W0526 21:25:59.375312  527485 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955_empty.pem, impossibly tiny 0 bytes
	I0526 21:25:59.375330  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem (1675 bytes)
	I0526 21:25:59.375356  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem (1078 bytes)
	I0526 21:25:59.375379  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem (1123 bytes)
	I0526 21:25:59.375401  527485 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem (1679 bytes)
	I0526 21:25:59.375427  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.375456  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem -> /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.375838  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0526 21:25:59.392253  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0526 21:25:59.407698  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0526 21:25:59.423260  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0526 21:25:59.439266  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0526 21:25:59.454821  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem --> /usr/share/ca-certificates/510955.pem (1338 bytes)
	I0526 21:25:59.470919  527485 ssh_runner.go:149] Run: openssl version
	I0526 21:25:59.476270  527485 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0526 21:25:59.476758  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0526 21:25:59.484115  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.488098  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.488277  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.488330  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:25:59.493735  527485 command_runner.go:124] > b5213941
	I0526 21:25:59.494057  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0526 21:25:59.501464  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510955.pem && ln -fs /usr/share/ca-certificates/510955.pem /etc/ssl/certs/510955.pem"
	I0526 21:25:59.509156  527485 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.513315  527485 command_runner.go:124] > -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.513663  527485 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.513696  527485 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510955.pem
	I0526 21:25:59.519863  527485 command_runner.go:124] > 51391683
	I0526 21:25:59.520201  527485 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510955.pem /etc/ssl/certs/51391683.0"
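
The b5213941.0 and 51391683.0 links created above follow the standard OpenSSL c_rehash layout: the link name is the certificate's subject hash, which is exactly what the openssl x509 -hash calls in this log compute. Reproducing one of them by hand on the node:

	# The link name is "<subject hash>.0", so TLS clients can find the CA by hash.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
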
	I0526 21:25:59.527724  527485 ssh_runner.go:149] Run: sudo crictl info
	I0526 21:25:59.551101  527485 command_runner.go:124] > {
	I0526 21:25:59.551120  527485 command_runner.go:124] >   "status": {
	I0526 21:25:59.551127  527485 command_runner.go:124] >     "conditions": [
	I0526 21:25:59.551132  527485 command_runner.go:124] >       {
	I0526 21:25:59.551139  527485 command_runner.go:124] >         "type": "RuntimeReady",
	I0526 21:25:59.551148  527485 command_runner.go:124] >         "status": true,
	I0526 21:25:59.551154  527485 command_runner.go:124] >         "reason": "",
	I0526 21:25:59.551161  527485 command_runner.go:124] >         "message": ""
	I0526 21:25:59.551168  527485 command_runner.go:124] >       },
	I0526 21:25:59.551173  527485 command_runner.go:124] >       {
	I0526 21:25:59.551180  527485 command_runner.go:124] >         "type": "NetworkReady",
	I0526 21:25:59.551190  527485 command_runner.go:124] >         "status": false,
	I0526 21:25:59.551201  527485 command_runner.go:124] >         "reason": "NetworkPluginNotReady",
	I0526 21:25:59.551212  527485 command_runner.go:124] >         "message": "Network plugin returns error: cni plugin not initialized"
	I0526 21:25:59.551226  527485 command_runner.go:124] >       }
	I0526 21:25:59.551231  527485 command_runner.go:124] >     ]
	I0526 21:25:59.551235  527485 command_runner.go:124] >   },
	I0526 21:25:59.551240  527485 command_runner.go:124] >   "cniconfig": {
	I0526 21:25:59.551245  527485 command_runner.go:124] >     "PluginDirs": [
	I0526 21:25:59.551250  527485 command_runner.go:124] >       "/opt/cni/bin"
	I0526 21:25:59.551254  527485 command_runner.go:124] >     ],
	I0526 21:25:59.551260  527485 command_runner.go:124] >     "PluginConfDir": "/etc/cni/net.mk",
	I0526 21:25:59.551269  527485 command_runner.go:124] >     "PluginMaxConfNum": 1,
	I0526 21:25:59.551274  527485 command_runner.go:124] >     "Prefix": "eth",
	I0526 21:25:59.551282  527485 command_runner.go:124] >     "Networks": [
	I0526 21:25:59.551288  527485 command_runner.go:124] >       {
	I0526 21:25:59.551294  527485 command_runner.go:124] >         "Config": {
	I0526 21:25:59.551308  527485 command_runner.go:124] >           "Name": "cni-loopback",
	I0526 21:25:59.551318  527485 command_runner.go:124] >           "CNIVersion": "0.3.1",
	I0526 21:25:59.551325  527485 command_runner.go:124] >           "Plugins": [
	I0526 21:25:59.551331  527485 command_runner.go:124] >             {
	I0526 21:25:59.551337  527485 command_runner.go:124] >               "Network": {
	I0526 21:25:59.551349  527485 command_runner.go:124] >                 "type": "loopback",
	I0526 21:25:59.551358  527485 command_runner.go:124] >                 "ipam": {},
	I0526 21:25:59.551366  527485 command_runner.go:124] >                 "dns": {}
	I0526 21:25:59.551373  527485 command_runner.go:124] >               },
	I0526 21:25:59.551381  527485 command_runner.go:124] >               "Source": "{\"type\":\"loopback\"}"
	I0526 21:25:59.551390  527485 command_runner.go:124] >             }
	I0526 21:25:59.551396  527485 command_runner.go:124] >           ],
	I0526 21:25:59.551411  527485 command_runner.go:124] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0526 21:25:59.551419  527485 command_runner.go:124] >         },
	I0526 21:25:59.551426  527485 command_runner.go:124] >         "IFName": "lo"
	I0526 21:25:59.551434  527485 command_runner.go:124] >       }
	I0526 21:25:59.551439  527485 command_runner.go:124] >     ]
	I0526 21:25:59.551446  527485 command_runner.go:124] >   },
	I0526 21:25:59.551452  527485 command_runner.go:124] >   "config": {
	I0526 21:25:59.551465  527485 command_runner.go:124] >     "containerd": {
	I0526 21:25:59.551473  527485 command_runner.go:124] >       "snapshotter": "overlayfs",
	I0526 21:25:59.551481  527485 command_runner.go:124] >       "defaultRuntimeName": "default",
	I0526 21:25:59.551491  527485 command_runner.go:124] >       "defaultRuntime": {
	I0526 21:25:59.551499  527485 command_runner.go:124] >         "runtimeType": "io.containerd.runc.v2",
	I0526 21:25:59.551509  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:25:59.551517  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:25:59.551527  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:25:59.551535  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:25:59.551544  527485 command_runner.go:124] >         "options": {},
	I0526 21:25:59.551553  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:25:59.551563  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:25:59.551569  527485 command_runner.go:124] >       },
	I0526 21:25:59.551578  527485 command_runner.go:124] >       "untrustedWorkloadRuntime": {
	I0526 21:25:59.551585  527485 command_runner.go:124] >         "runtimeType": "",
	I0526 21:25:59.551593  527485 command_runner.go:124] >         "runtimeEngine": "",
	I0526 21:25:59.551600  527485 command_runner.go:124] >         "PodAnnotations": null,
	I0526 21:25:59.551611  527485 command_runner.go:124] >         "ContainerAnnotations": null,
	I0526 21:25:59.551619  527485 command_runner.go:124] >         "runtimeRoot": "",
	I0526 21:25:59.551627  527485 command_runner.go:124] >         "options": null,
	I0526 21:25:59.551636  527485 command_runner.go:124] >         "privileged_without_host_devices": false,
	I0526 21:25:59.551646  527485 command_runner.go:124] >         "baseRuntimeSpec": ""
	I0526 21:25:59.551652  527485 command_runner.go:124] >       },
	I0526 21:25:59.551660  527485 command_runner.go:124] >       "runtimes": {
	I0526 21:25:59.551666  527485 command_runner.go:124] >         "default": {
	I0526 21:25:59.551675  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:25:59.551685  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:25:59.551692  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:25:59.551705  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:25:59.551713  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:25:59.551720  527485 command_runner.go:124] >           "options": {},
	I0526 21:25:59.551731  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:25:59.551740  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:25:59.551748  527485 command_runner.go:124] >         },
	I0526 21:25:59.551754  527485 command_runner.go:124] >         "runc": {
	I0526 21:25:59.551764  527485 command_runner.go:124] >           "runtimeType": "io.containerd.runc.v2",
	I0526 21:25:59.551771  527485 command_runner.go:124] >           "runtimeEngine": "",
	I0526 21:25:59.551782  527485 command_runner.go:124] >           "PodAnnotations": null,
	I0526 21:25:59.551791  527485 command_runner.go:124] >           "ContainerAnnotations": null,
	I0526 21:25:59.551800  527485 command_runner.go:124] >           "runtimeRoot": "",
	I0526 21:25:59.551807  527485 command_runner.go:124] >           "options": {},
	I0526 21:25:59.551818  527485 command_runner.go:124] >           "privileged_without_host_devices": false,
	I0526 21:25:59.551826  527485 command_runner.go:124] >           "baseRuntimeSpec": ""
	I0526 21:25:59.551834  527485 command_runner.go:124] >         }
	I0526 21:25:59.551841  527485 command_runner.go:124] >       },
	I0526 21:25:59.551848  527485 command_runner.go:124] >       "noPivot": false,
	I0526 21:25:59.551856  527485 command_runner.go:124] >       "disableSnapshotAnnotations": true,
	I0526 21:25:59.551869  527485 command_runner.go:124] >       "discardUnpackedLayers": false
	I0526 21:25:59.551875  527485 command_runner.go:124] >     },
	I0526 21:25:59.551881  527485 command_runner.go:124] >     "cni": {
	I0526 21:25:59.551888  527485 command_runner.go:124] >       "binDir": "/opt/cni/bin",
	I0526 21:25:59.551896  527485 command_runner.go:124] >       "confDir": "/etc/cni/net.mk",
	I0526 21:25:59.551902  527485 command_runner.go:124] >       "maxConfNum": 1,
	I0526 21:25:59.551910  527485 command_runner.go:124] >       "confTemplate": ""
	I0526 21:25:59.551915  527485 command_runner.go:124] >     },
	I0526 21:25:59.551922  527485 command_runner.go:124] >     "registry": {
	I0526 21:25:59.551928  527485 command_runner.go:124] >       "mirrors": {
	I0526 21:25:59.551935  527485 command_runner.go:124] >         "docker.io": {
	I0526 21:25:59.551941  527485 command_runner.go:124] >           "endpoint": [
	I0526 21:25:59.551952  527485 command_runner.go:124] >             "https://registry-1.docker.io"
	I0526 21:25:59.551958  527485 command_runner.go:124] >           ]
	I0526 21:25:59.551965  527485 command_runner.go:124] >         }
	I0526 21:25:59.551970  527485 command_runner.go:124] >       },
	I0526 21:25:59.551976  527485 command_runner.go:124] >       "configs": null,
	I0526 21:25:59.551983  527485 command_runner.go:124] >       "auths": null,
	I0526 21:25:59.551989  527485 command_runner.go:124] >       "headers": null
	I0526 21:25:59.551996  527485 command_runner.go:124] >     },
	I0526 21:25:59.552002  527485 command_runner.go:124] >     "imageDecryption": {
	I0526 21:25:59.552012  527485 command_runner.go:124] >       "keyModel": ""
	I0526 21:25:59.552018  527485 command_runner.go:124] >     },
	I0526 21:25:59.552026  527485 command_runner.go:124] >     "disableTCPService": true,
	I0526 21:25:59.552033  527485 command_runner.go:124] >     "streamServerAddress": "",
	I0526 21:25:59.552042  527485 command_runner.go:124] >     "streamServerPort": "10010",
	I0526 21:25:59.552051  527485 command_runner.go:124] >     "streamIdleTimeout": "4h0m0s",
	I0526 21:25:59.552058  527485 command_runner.go:124] >     "enableSelinux": false,
	I0526 21:25:59.552068  527485 command_runner.go:124] >     "selinuxCategoryRange": 1024,
	I0526 21:25:59.552076  527485 command_runner.go:124] >     "sandboxImage": "k8s.gcr.io/pause:3.2",
	I0526 21:25:59.552085  527485 command_runner.go:124] >     "statsCollectPeriod": 10,
	I0526 21:25:59.552092  527485 command_runner.go:124] >     "systemdCgroup": false,
	I0526 21:25:59.552100  527485 command_runner.go:124] >     "enableTLSStreaming": false,
	I0526 21:25:59.552106  527485 command_runner.go:124] >     "x509KeyPairStreaming": {
	I0526 21:25:59.552114  527485 command_runner.go:124] >       "tlsCertFile": "",
	I0526 21:25:59.552120  527485 command_runner.go:124] >       "tlsKeyFile": ""
	I0526 21:25:59.552126  527485 command_runner.go:124] >     },
	I0526 21:25:59.552133  527485 command_runner.go:124] >     "maxContainerLogSize": 16384,
	I0526 21:25:59.552143  527485 command_runner.go:124] >     "disableCgroup": false,
	I0526 21:25:59.552151  527485 command_runner.go:124] >     "disableApparmor": false,
	I0526 21:25:59.552158  527485 command_runner.go:124] >     "restrictOOMScoreAdj": false,
	I0526 21:25:59.552167  527485 command_runner.go:124] >     "maxConcurrentDownloads": 3,
	I0526 21:25:59.552174  527485 command_runner.go:124] >     "disableProcMount": false,
	I0526 21:25:59.552182  527485 command_runner.go:124] >     "unsetSeccompProfile": "",
	I0526 21:25:59.552189  527485 command_runner.go:124] >     "tolerateMissingHugetlbController": true,
	I0526 21:25:59.552199  527485 command_runner.go:124] >     "disableHugetlbController": true,
	I0526 21:25:59.552207  527485 command_runner.go:124] >     "ignoreImageDefinedVolumes": false,
	I0526 21:25:59.552218  527485 command_runner.go:124] >     "containerdRootDir": "/mnt/vda1/var/lib/containerd",
	I0526 21:25:59.552229  527485 command_runner.go:124] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0526 21:25:59.552240  527485 command_runner.go:124] >     "rootDir": "/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0526 21:25:59.552252  527485 command_runner.go:124] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0526 21:25:59.552258  527485 command_runner.go:124] >   },
	I0526 21:25:59.552264  527485 command_runner.go:124] >   "golang": "go1.13.15",
	I0526 21:25:59.552324  527485 command_runner.go:124] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.mk: cni plugin not initialized: failed to load cni config"
	I0526 21:25:59.552334  527485 command_runner.go:124] > }
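
In the crictl info output above, RuntimeReady is true but NetworkReady is false with "cni plugin not initialized"; that is expected at this point, since the kindnet CNI DaemonSet is only applied after the node joins the cluster. To watch just the runtime conditions on the node (assumes jq is installed, which minikube does not require):

	# Extract only the runtime/network readiness conditions from crictl info.
	sudo crictl info | jq '.status.conditions'
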
	I0526 21:25:59.552961  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:25:59.552978  527485 cni.go:154] 2 nodes found, recommending kindnet
	I0526 21:25:59.552991  527485 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0526 21:25:59.553008  527485 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210526212238-510955 NodeName:multinode-20210526212238-510955-m02 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.87 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0526 21:25:59.553130  527485 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "multinode-20210526212238-510955-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
	I0526 21:25:59.553214  527485 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=multinode-20210526212238-510955-m02 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.87 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
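
The ExecStart line above is what ends up in the 581-byte 10-kubeadm.conf drop-in that is scp'd a few lines below. To confirm what systemd will actually launch on the node, one can run (standard systemd commands, not minikube tooling):

	# Show the kubelet unit plus its kubeadm drop-in, and the effective ExecStart.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart
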
	I0526 21:25:59.553264  527485 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0526 21:25:59.560597  527485 command_runner.go:124] > kubeadm
	I0526 21:25:59.560615  527485 command_runner.go:124] > kubectl
	I0526 21:25:59.560620  527485 command_runner.go:124] > kubelet
	I0526 21:25:59.560934  527485 binaries.go:44] Found k8s binaries, skipping transfer
	I0526 21:25:59.560975  527485 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0526 21:25:59.567380  527485 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0526 21:25:59.578931  527485 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0526 21:25:59.590340  527485 ssh_runner.go:149] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0526 21:25:59.594107  527485 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:25:59.604129  527485 host.go:66] Checking if "multinode-20210526212238-510955" exists ...
	I0526 21:25:59.604413  527485 cache.go:108] acquiring lock: {Name:mk0fbd6526c48f14b253d250dd93663316e68dc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:25:59.604550  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:59.604588  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:59.604557  527485 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 exists
	I0526 21:25:59.604679  527485 cache.go:97] cache image "minikube-local-cache-test:functional-20210526211257-510955" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955" took 272.85µs
	I0526 21:25:59.604703  527485 cache.go:81] save to tar file minikube-local-cache-test:functional-20210526211257-510955 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 succeeded
	I0526 21:25:59.604717  527485 cache.go:88] Successfully saved all images to host disk.
	I0526 21:25:59.605163  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:59.605206  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:59.616711  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:42125
	I0526 21:25:59.617147  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:59.617659  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:59.617683  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:59.618111  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:59.618296  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:25:59.618446  527485 start.go:224] JoinCluster: &{Name:multinode-20210526212238-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210526
212238-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0526 21:25:59.618550  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm token create --print-join-command --ttl=0"
	I0526 21:25:59.618568  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:25:59.620352  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38883
	I0526 21:25:59.620740  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:59.621147  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:59.621170  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:59.621478  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:59.621674  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:25:59.624725  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.625091  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:25:59.625127  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.625241  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:25:59.625423  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:25:59.625584  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:25:59.625735  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:25:59.625912  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:25:59.625956  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:25:59.636036  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0526 21:25:59.636433  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:25:59.636888  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:25:59.636914  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:25:59.637199  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:25:59.637348  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .DriverName
	I0526 21:25:59.637531  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:25:59.637555  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHHostname
	I0526 21:25:59.642278  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.642595  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:8b:34", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:22:53 +0000 UTC Type:0 Mac:52:54:00:0c:8b:34 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-20210526212238-510955 Clientid:01:52:54:00:0c:8b:34}
	I0526 21:25:59.642624  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | domain multinode-20210526212238-510955 has defined IP address 192.168.39.229 and MAC address 52:54:00:0c:8b:34 in network mk-multinode-20210526212238-510955
	I0526 21:25:59.642765  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHPort
	I0526 21:25:59.642942  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHKeyPath
	I0526 21:25:59.643094  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetSSHUsername
	I0526 21:25:59.643231  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955/id_rsa Username:docker}
	I0526 21:25:59.864815  527485 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token ch1ot4.9etgzhm4zh9wn897     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace 
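
The join command printed above embeds a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. Since certificates live under /var/lib/minikube/certs in this setup, it can be recomputed on the control-plane node with standard openssl, and should reproduce the sha256 value shown in the join command:

	# Recompute the discovery hash from the cluster CA public key.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256
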
	I0526 21:25:59.866688  527485 command_runner.go:124] > {
	I0526 21:25:59.866706  527485 command_runner.go:124] >   "images": [
	I0526 21:25:59.866710  527485 command_runner.go:124] >     {
	I0526 21:25:59.866722  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:25:59.866731  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.866741  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:25:59.866766  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866773  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.866789  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:25:59.866800  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866807  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:25:59.866813  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.866818  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.866825  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.866831  527485 command_runner.go:124] >     },
	I0526 21:25:59.866837  527485 command_runner.go:124] >     {
	I0526 21:25:59.866852  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:25:59.866861  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.866870  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:25:59.866879  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866885  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.866899  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:25:59.866907  527485 command_runner.go:124] >       ],
	I0526 21:25:59.866914  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:25:59.866922  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.866932  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:25:59.866921  527485 start.go:245] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
	I0526 21:25:59.866961  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm join control-plane.minikube.internal:8443 --token ch1ot4.9etgzhm4zh9wn897     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-20210526212238-510955-m02"
	I0526 21:25:59.866939  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867016  527485 command_runner.go:124] >     },
	I0526 21:25:59.867021  527485 command_runner.go:124] >     {
	I0526 21:25:59.867029  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:25:59.867033  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867041  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:25:59.867047  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867060  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867074  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:25:59.867090  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867097  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:25:59.867106  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867112  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867123  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867128  527485 command_runner.go:124] >     },
	I0526 21:25:59.867134  527485 command_runner.go:124] >     {
	I0526 21:25:59.867145  527485 command_runner.go:124] >       "id": "sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830",
	I0526 21:25:59.867155  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867166  527485 command_runner.go:124] >         "docker.io/library/minikube-local-cache-test:functional-20210526211257-510955"
	I0526 21:25:59.867174  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867181  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867189  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867196  527485 command_runner.go:124] >       "size": "1737",
	I0526 21:25:59.867205  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867211  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867218  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867224  527485 command_runner.go:124] >     },
	I0526 21:25:59.867228  527485 command_runner.go:124] >     {
	I0526 21:25:59.867236  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:25:59.867243  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867251  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:25:59.867260  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867266  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867279  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:25:59.867295  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867303  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:25:59.867312  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867318  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867327  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867340  527485 command_runner.go:124] >     },
	I0526 21:25:59.867350  527485 command_runner.go:124] >     {
	I0526 21:25:59.867367  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:25:59.867377  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867384  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:25:59.867393  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867399  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867414  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:25:59.867421  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867426  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:25:59.867434  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867440  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867448  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867454  527485 command_runner.go:124] >     },
	I0526 21:25:59.867462  527485 command_runner.go:124] >     {
	I0526 21:25:59.867473  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:25:59.867482  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867489  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:25:59.867498  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867504  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867518  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:25:59.867522  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867531  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:25:59.867537  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867544  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867552  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867556  527485 command_runner.go:124] >     },
	I0526 21:25:59.867561  527485 command_runner.go:124] >     {
	I0526 21:25:59.867573  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:25:59.867580  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867587  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:25:59.867596  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867602  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867614  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:25:59.867622  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867629  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:25:59.867635  527485 command_runner.go:124] >       "uid": {
	I0526 21:25:59.867641  527485 command_runner.go:124] >         "value": "0"
	I0526 21:25:59.867649  527485 command_runner.go:124] >       },
	I0526 21:25:59.867656  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867664  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867669  527485 command_runner.go:124] >     },
	I0526 21:25:59.867674  527485 command_runner.go:124] >     {
	I0526 21:25:59.867685  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:25:59.867691  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867700  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:25:59.867706  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867712  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867725  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:25:59.867730  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867736  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:25:59.867740  527485 command_runner.go:124] >       "uid": {
	I0526 21:25:59.867746  527485 command_runner.go:124] >         "value": "0"
	I0526 21:25:59.867749  527485 command_runner.go:124] >       },
	I0526 21:25:59.867753  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867757  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867760  527485 command_runner.go:124] >     },
	I0526 21:25:59.867764  527485 command_runner.go:124] >     {
	I0526 21:25:59.867770  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:25:59.867775  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867779  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:25:59.867782  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867786  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867793  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:25:59.867796  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867800  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:25:59.867804  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867808  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867812  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867815  527485 command_runner.go:124] >     },
	I0526 21:25:59.867818  527485 command_runner.go:124] >     {
	I0526 21:25:59.867828  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:25:59.867833  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867842  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:25:59.867845  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867849  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867857  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:25:59.867860  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867864  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:25:59.867868  527485 command_runner.go:124] >       "uid": {
	I0526 21:25:59.867874  527485 command_runner.go:124] >         "value": "0"
	I0526 21:25:59.867877  527485 command_runner.go:124] >       },
	I0526 21:25:59.867881  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867885  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867888  527485 command_runner.go:124] >     },
	I0526 21:25:59.867892  527485 command_runner.go:124] >     {
	I0526 21:25:59.867898  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:25:59.867902  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:25:59.867910  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:25:59.867913  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867917  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:25:59.867926  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:25:59.867929  527485 command_runner.go:124] >       ],
	I0526 21:25:59.867934  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:25:59.867937  527485 command_runner.go:124] >       "uid": null,
	I0526 21:25:59.867941  527485 command_runner.go:124] >       "username": "",
	I0526 21:25:59.867945  527485 command_runner.go:124] >       "spec": null
	I0526 21:25:59.867949  527485 command_runner.go:124] >     }
	I0526 21:25:59.867952  527485 command_runner.go:124] >   ]
	I0526 21:25:59.867955  527485 command_runner.go:124] > }
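The JSON listing above is the payload of `sudo crictl images --output json` on the node; minikube walks the `repoTags` arrays to decide whether the image it needs is already present before falling back to its on-disk cache. Below is a minimal Go sketch of that check, assuming only that crictl is installed locally; the struct and function names are illustrative, not minikube's own, and the fields simply mirror the keys visible in the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the per-image fields shown in the log output.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // crictl reports size as a decimal string
	Username    string   `json:"username"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

// hasRepoTag reports whether any listed image carries the wanted tag.
func hasRepoTag(raw []byte, want string) (bool, error) {
	var list crictlImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Local equivalent of the ssh_runner call recorded in the log.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	found, err := hasRepoTag(out, "docker.io/library/minikube-local-cache-test:functional-20210526211257-510955")
	fmt.Println(found, err)
}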
	I0526 21:25:59.868129  527485 containerd.go:566] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210526211257-510955". assuming images are not preloaded.
	I0526 21:25:59.868140  527485 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210526211257-510955]
	I0526 21:25:59.868189  527485 image.go:162] retrieving image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:25:59.868201  527485 image.go:168] checking repository: index.docker.io/library/minikube-local-cache-test
	W0526 21:25:59.927015  527485 image.go:175] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0526 21:25:59.927045  527485 image.go:176] short name: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:25:59.928079  527485 image.go:204] daemon lookup for minikube-local-cache-test:functional-20210526211257-510955: Error response from daemon: reference does not exist
	W0526 21:25:59.966822  527485 image.go:214] authn lookup for minikube-local-cache-test:functional-20210526211257-510955 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:25:59.994816  527485 command_runner.go:124] > [preflight] Running pre-flight checks
	I0526 21:26:00.009941  527485 image.go:218] remote lookup for minikube-local-cache-test:functional-20210526211257-510955: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:26:00.009973  527485 image.go:95] error retrieve Image minikube-local-cache-test:functional-20210526211257-510955 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0526 21:26:00.009996  527485 cache_images.go:106] "minikube-local-cache-test:functional-20210526211257-510955" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:00.010026  527485 cri.go:205] Removing image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:00.010094  527485 ssh_runner.go:149] Run: which crictl
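The image.go lines above record the lookup chain that ends in "needs transfer": the local docker daemon is tried first, then an anonymous remote lookup against Docker Hub, which returns 401/UNAUTHORIZED for this nonexistent repository and leaves the digest empty. The sketch below is an editorial rough equivalent, not minikube's image.go; it assumes the go-containerregistry module is available (the UNAUTHORIZED error format in the log matches that library's transport errors).

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// lookupDigest tries the local docker daemon, then the remote registry, and
// returns whatever digest it can resolve for the given short name.
func lookupDigest(tag string) (string, error) {
	ref, err := name.ParseReference(tag)
	if err != nil {
		return "", err
	}
	if img, err := daemon.Image(ref); err == nil {
		return digestOf(img) // found in the local docker daemon
	}
	// Remote lookup with the default keychain; an unauthenticated pull of a
	// private or nonexistent Docker Hub repo fails with UNAUTHORIZED, as in
	// the log above.
	img, err := remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
	if err != nil {
		return "", err
	}
	return digestOf(img)
}

func digestOf(img v1.Image) (string, error) {
	h, err := img.Digest()
	if err != nil {
		return "", err
	}
	return h.String(), nil
}

func main() {
	d, err := lookupDigest("minikube-local-cache-test:functional-20210526211257-510955")
	// An empty digest plus an error is what triggers the "needs transfer" path.
	fmt.Println(d, err)
}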
	I0526 21:26:00.310955  527485 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0526 21:26:00.310992  527485 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0526 21:26:00.345407  527485 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0526 21:26:00.345844  527485 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0526 21:26:00.345868  527485 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0526 21:26:00.486355  527485 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0526 21:26:07.017586  527485 command_runner.go:124] > This node has joined the cluster:
	I0526 21:26:07.017615  527485 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0526 21:26:07.017622  527485 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0526 21:26:07.017629  527485 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0526 21:26:07.019048  527485 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0526 21:26:07.019107  527485 command_runner.go:124] > /bin/crictl
	I0526 21:26:07.019143  527485 ssh_runner.go:189] Completed: which crictl: (7.009033477s)
	I0526 21:26:07.019205  527485 ssh_runner.go:149] Run: sudo /bin/crictl rmi minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.019308  527485 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm join control-plane.minikube.internal:8443 --token ch1ot4.9etgzhm4zh9wn897     --discovery-token-ca-cert-hash sha256:12858510f46d14420576d9acdde7779529e8255fb2d74cf18105715622c3cace --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=multinode-20210526212238-510955-m02": (7.152324551s)
	I0526 21:26:07.019338  527485 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0526 21:26:07.298667  527485 command_runner.go:124] > Deleted: docker.io/library/minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.298756  527485 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.298800  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.298804  527485 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0526 21:26:07.298834  527485 start.go:226] JoinCluster complete in 7.68039068s
	I0526 21:26:07.298848  527485 cni.go:93] Creating CNI manager for ""
	I0526 21:26:07.298854  527485 cni.go:154] 2 nodes found, recommending kindnet
	I0526 21:26:07.298881  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.298894  527485 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0526 21:26:07.305231  527485 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0526 21:26:07.305252  527485 command_runner.go:124] >   Size: 2849304   	Blocks: 5568       IO Block: 4096   regular file
	I0526 21:26:07.305259  527485 command_runner.go:124] > Device: 10h/16d	Inode: 23213       Links: 1
	I0526 21:26:07.305266  527485 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0526 21:26:07.305272  527485 command_runner.go:124] > Access: 2021-05-26 21:22:53.150354389 +0000
	I0526 21:26:07.305278  527485 command_runner.go:124] > Modify: 2021-05-05 21:33:55.000000000 +0000
	I0526 21:26:07.305283  527485 command_runner.go:124] > Change: 2021-05-26 21:22:48.920437741 +0000
	I0526 21:26:07.305286  527485 command_runner.go:124] >  Birth: -
	I0526 21:26:07.305559  527485 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	I0526 21:26:07.305579  527485 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0526 21:26:07.308129  527485 command_runner.go:124] > 5120 2021-05-26 21:15:56.088554954 +0000
	I0526 21:26:07.308647  527485 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955 (exists)
	I0526 21:26:07.308661  527485 containerd.go:260] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.308705  527485 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.321322  527485 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0526 21:26:07.561843  527485 command_runner.go:124] > unpacking docker.io/library/minikube-local-cache-test:functional-20210526211257-510955 (sha256:d8b8bd0a35bb7de49f0a81841d103dd430b2bd6e4ca4d65facee12d3e0605733)...done
	I0526 21:26:07.563879  527485 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 from cache
	I0526 21:26:07.563918  527485 cache_images.go:113] Successfully loaded all cached images
	I0526 21:26:07.563926  527485 cache_images.go:82] LoadImages completed in 7.695778796s
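Because the digest lookup failed, the cached tarball is copied to the node and imported straight into containerd's k8s.io namespace with ctr, as the ssh_runner lines above show. A stand-alone sketch of that import step follows, run locally rather than over SSH; the path is the one recorded in the log and would differ elsewhere.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors: sudo ctr -n=k8s.io images import <tarPath>
func loadCachedImage(tarPath string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Node-side path as recorded in the log above.
	tar := "/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955"
	if err := loadCachedImage(tar); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("image imported into the k8s.io namespace")
}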
	I0526 21:26:07.564252  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:26:07.564291  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:26:07.575574  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0526 21:26:07.576036  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:26:07.576536  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:26:07.576562  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:26:07.576967  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:26:07.577142  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetState
	I0526 21:26:07.580730  527485 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:26:07.580782  527485 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:26:07.592963  527485 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0526 21:26:07.593471  527485 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:26:07.594036  527485 main.go:128] libmachine: Using API Version  1
	I0526 21:26:07.594068  527485 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:26:07.594465  527485 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:26:07.594646  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .DriverName
	I0526 21:26:07.594895  527485 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:26:07.594929  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHHostname
	I0526 21:26:07.601623  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:26:07.602019  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f1:a0", ip: ""} in network mk-multinode-20210526212238-510955: {Iface:virbr1 ExpiryTime:2021-05-26 22:25:34 +0000 UTC Type:0 Mac:52:54:00:9f:f1:a0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-20210526212238-510955-m02 Clientid:01:52:54:00:9f:f1:a0}
	I0526 21:26:07.602056  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | domain multinode-20210526212238-510955-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:9f:f1:a0 in network mk-multinode-20210526212238-510955
	I0526 21:26:07.602144  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHPort
	I0526 21:26:07.602316  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHKeyPath
	I0526 21:26:07.602462  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetSSHUsername
	I0526 21:26:07.602655  527485 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/multinode-20210526212238-510955-m02/id_rsa Username:docker}
	I0526 21:26:07.742042  527485 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0526 21:26:07.742085  527485 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0526 21:26:07.742095  527485 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0526 21:26:07.742102  527485 command_runner.go:124] > daemonset.apps/kindnet configured
	I0526 21:26:07.742151  527485 start.go:209] Will wait 6m0s for node &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
	I0526 21:26:07.742174  527485 command_runner.go:124] > {
	I0526 21:26:07.742192  527485 command_runner.go:124] >   "images": [
	I0526 21:26:07.742199  527485 command_runner.go:124] >     {
	I0526 21:26:07.744012  527485 out.go:170] * Verifying Kubernetes components...
	I0526 21:26:07.742212  527485 command_runner.go:124] >       "id": "sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0526 21:26:07.744108  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744122  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0526 21:26:07.744127  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744131  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744140  527485 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0526 21:26:07.744157  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744168  527485 command_runner.go:124] >       "size": "53960776",
	I0526 21:26:07.744090  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:26:07.744175  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744260  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744268  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744275  527485 command_runner.go:124] >     },
	I0526 21:26:07.744280  527485 command_runner.go:124] >     {
	I0526 21:26:07.744299  527485 command_runner.go:124] >       "id": "sha256:9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0526 21:26:07.744309  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744318  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0526 21:26:07.744326  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744333  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744347  527485 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0526 21:26:07.744355  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744362  527485 command_runner.go:124] >       "size": "67992170",
	I0526 21:26:07.744368  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744379  527485 command_runner.go:124] >       "username": "nonroot",
	I0526 21:26:07.744388  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744393  527485 command_runner.go:124] >     },
	I0526 21:26:07.744398  527485 command_runner.go:124] >     {
	I0526 21:26:07.744409  527485 command_runner.go:124] >       "id": "sha256:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0526 21:26:07.744418  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744426  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0526 21:26:07.744433  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744447  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744462  527485 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf"
	I0526 21:26:07.744469  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744475  527485 command_runner.go:124] >       "size": "16020077",
	I0526 21:26:07.744481  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744487  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744494  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744499  527485 command_runner.go:124] >     },
	I0526 21:26:07.744506  527485 command_runner.go:124] >     {
	I0526 21:26:07.744516  527485 command_runner.go:124] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0526 21:26:07.744525  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744533  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0526 21:26:07.744539  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744545  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744559  527485 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0526 21:26:07.744567  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744573  527485 command_runner.go:124] >       "size": "9058936",
	I0526 21:26:07.744581  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744587  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744594  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744599  527485 command_runner.go:124] >     },
	I0526 21:26:07.744605  527485 command_runner.go:124] >     {
	I0526 21:26:07.744615  527485 command_runner.go:124] >       "id": "sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	I0526 21:26:07.744625  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744632  527485 command_runner.go:124] >         "k8s.gcr.io/coredns:1.7.0"
	I0526 21:26:07.744640  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744646  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744659  527485 command_runner.go:124] >         "k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c"
	I0526 21:26:07.744667  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744673  527485 command_runner.go:124] >       "size": "13982350",
	I0526 21:26:07.744680  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744689  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744698  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744703  527485 command_runner.go:124] >     },
	I0526 21:26:07.744716  527485 command_runner.go:124] >     {
	I0526 21:26:07.744727  527485 command_runner.go:124] >       "id": "sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0526 21:26:07.744734  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744741  527485 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0526 21:26:07.744746  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744756  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744767  527485 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2"
	I0526 21:26:07.744772  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744779  527485 command_runner.go:124] >       "size": "86742272",
	I0526 21:26:07.744786  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.744793  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744800  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744805  527485 command_runner.go:124] >     },
	I0526 21:26:07.744811  527485 command_runner.go:124] >     {
	I0526 21:26:07.744827  527485 command_runner.go:124] >       "id": "sha256:a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820",
	I0526 21:26:07.744838  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744846  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.20.2"
	I0526 21:26:07.744851  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744857  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744886  527485 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411"
	I0526 21:26:07.744894  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744901  527485 command_runner.go:124] >       "size": "30411317",
	I0526 21:26:07.744908  527485 command_runner.go:124] >       "uid": {
	I0526 21:26:07.744914  527485 command_runner.go:124] >         "value": "0"
	I0526 21:26:07.744920  527485 command_runner.go:124] >       },
	I0526 21:26:07.744926  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.744934  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.744939  527485 command_runner.go:124] >     },
	I0526 21:26:07.744946  527485 command_runner.go:124] >     {
	I0526 21:26:07.744959  527485 command_runner.go:124] >       "id": "sha256:a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696",
	I0526 21:26:07.744965  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.744976  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.20.2"
	I0526 21:26:07.744981  527485 command_runner.go:124] >       ],
	I0526 21:26:07.744988  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.744999  527485 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:842a071d4ad49b0018f7f7404ac8a4ddfc2bce2ce15b3f8131d89563fda36c9b"
	I0526 21:26:07.745006  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745013  527485 command_runner.go:124] >       "size": "29362302",
	I0526 21:26:07.745019  527485 command_runner.go:124] >       "uid": {
	I0526 21:26:07.745024  527485 command_runner.go:124] >         "value": "0"
	I0526 21:26:07.745031  527485 command_runner.go:124] >       },
	I0526 21:26:07.745037  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745045  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745050  527485 command_runner.go:124] >     },
	I0526 21:26:07.745057  527485 command_runner.go:124] >     {
	I0526 21:26:07.745067  527485 command_runner.go:124] >       "id": "sha256:43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0",
	I0526 21:26:07.745076  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.745083  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.20.2"
	I0526 21:26:07.745091  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745098  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.745111  527485 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:326fe8a4508a5db91cf234c4867eff5ba458bc4107c2a7e15c827a74faa19be9"
	I0526 21:26:07.745118  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745124  527485 command_runner.go:124] >       "size": "49539606",
	I0526 21:26:07.745132  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.745137  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745144  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745149  527485 command_runner.go:124] >     },
	I0526 21:26:07.745157  527485 command_runner.go:124] >     {
	I0526 21:26:07.745167  527485 command_runner.go:124] >       "id": "sha256:ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f",
	I0526 21:26:07.745177  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.745185  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.20.2"
	I0526 21:26:07.745193  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745199  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.745210  527485 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:304b3d70497bd62498f19f82f9ef164d38948e5ae94966690abfe9d1858867e2"
	I0526 21:26:07.745218  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745224  527485 command_runner.go:124] >       "size": "14012937",
	I0526 21:26:07.745232  527485 command_runner.go:124] >       "uid": {
	I0526 21:26:07.745238  527485 command_runner.go:124] >         "value": "0"
	I0526 21:26:07.745245  527485 command_runner.go:124] >       },
	I0526 21:26:07.745256  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745265  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745270  527485 command_runner.go:124] >     },
	I0526 21:26:07.745277  527485 command_runner.go:124] >     {
	I0526 21:26:07.745294  527485 command_runner.go:124] >       "id": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	I0526 21:26:07.745303  527485 command_runner.go:124] >       "repoTags": [
	I0526 21:26:07.745309  527485 command_runner.go:124] >         "k8s.gcr.io/pause:3.2"
	I0526 21:26:07.745319  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745326  527485 command_runner.go:124] >       "repoDigests": [
	I0526 21:26:07.745338  527485 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f"
	I0526 21:26:07.745344  527485 command_runner.go:124] >       ],
	I0526 21:26:07.745350  527485 command_runner.go:124] >       "size": "299513",
	I0526 21:26:07.745356  527485 command_runner.go:124] >       "uid": null,
	I0526 21:26:07.745364  527485 command_runner.go:124] >       "username": "",
	I0526 21:26:07.745370  527485 command_runner.go:124] >       "spec": null
	I0526 21:26:07.745374  527485 command_runner.go:124] >     }
	I0526 21:26:07.745379  527485 command_runner.go:124] >   ]
	I0526 21:26:07.745383  527485 command_runner.go:124] > }
	I0526 21:26:07.745562  527485 containerd.go:566] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210526211257-510955". assuming images are not preloaded.
	I0526 21:26:07.745581  527485 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210526211257-510955]
	I0526 21:26:07.745632  527485 image.go:162] retrieving image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.745650  527485 image.go:168] checking repository: index.docker.io/library/minikube-local-cache-test
	I0526 21:26:07.770156  527485 loader.go:379] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:26:07.770880  527485 kapi.go:59] client config for multinode-20210526212238-510955: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/multinode-20210526212238-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:26:07.773470  527485 node_ready.go:35] waiting up to 6m0s for node "multinode-20210526212238-510955-m02" to be "Ready" ...
	I0526 21:26:07.773560  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:07.773573  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:07.773580  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:07.773589  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:07.780507  527485 round_trippers.go:448] Response Status: 200 OK in 6 milliseconds
	I0526 21:26:07.780522  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:07.780527  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:07.780532  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:07.780536  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:07.780540  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:07.780544  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:07 GMT
	I0526 21:26:07.781445  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	W0526 21:26:07.798802  527485 image.go:175] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0526 21:26:07.798833  527485 image.go:176] short name: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.802331  527485 image.go:204] daemon lookup for minikube-local-cache-test:functional-20210526211257-510955: Error response from daemon: reference does not exist
	W0526 21:26:07.846923  527485 image.go:214] authn lookup for minikube-local-cache-test:functional-20210526211257-510955 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:26:07.894841  527485 image.go:218] remote lookup for minikube-local-cache-test:functional-20210526211257-510955: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 21:26:07.894894  527485 image.go:95] error retrieve Image minikube-local-cache-test:functional-20210526211257-510955 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0526 21:26:07.894924  527485 cache_images.go:106] "minikube-local-cache-test:functional-20210526211257-510955" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.894963  527485 cri.go:205] Removing image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.895010  527485 ssh_runner.go:149] Run: which crictl
	I0526 21:26:07.900977  527485 command_runner.go:124] > /bin/crictl
	I0526 21:26:07.901235  527485 ssh_runner.go:149] Run: sudo /bin/crictl rmi minikube-local-cache-test:functional-20210526211257-510955
	I0526 21:26:07.925276  527485 command_runner.go:124] ! time="2021-05-26T21:26:07Z" level=error msg="no such image minikube-local-cache-test:functional-20210526211257-510955"
	I0526 21:26:07.925308  527485 command_runner.go:124] ! time="2021-05-26T21:26:07Z" level=fatal msg="unable to remove the image(s)"
	I0526 21:26:07.925350  527485 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.925385  527485 vm_assets.go:98] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.925452  527485 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.938254  527485 command_runner.go:124] ! stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:26:07.938651  527485 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955': No such file or directory
	I0526 21:26:07.938689  527485 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955 (5120 bytes)
	I0526 21:26:07.958227  527485 containerd.go:260] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:07.958318  527485 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210526211257-510955
	I0526 21:26:08.222264  527485 command_runner.go:124] > unpacking docker.io/library/minikube-local-cache-test:functional-20210526211257-510955 (sha256:d8b8bd0a35bb7de49f0a81841d103dd430b2bd6e4ca4d65facee12d3e0605733)...done
	I0526 21:26:08.225698  527485 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/minikube-local-cache-test_functional-20210526211257-510955 from cache
	I0526 21:26:08.225738  527485 cache_images.go:113] Successfully loaded all cached images
	I0526 21:26:08.225749  527485 cache_images.go:82] LoadImages completed in 480.158269ms
	I0526 21:26:08.225768  527485 cache_images.go:252] succeeded pushing to: multinode-20210526212238-510955 multinode-20210526212238-510955-m02
	I0526 21:26:08.225775  527485 cache_images.go:253] failed pushing to: 
	I0526 21:26:08.225807  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.225824  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:26:08.226096  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.226140  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:26:08.226143  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.226208  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.226218  527485 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .Close
	I0526 21:26:08.226430  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.226473  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.226488  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.226499  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .Close
	I0526 21:26:08.226457  527485 main.go:128] libmachine: (multinode-20210526212238-510955) DBG | Closing plugin on server side
	I0526 21:26:08.227680  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) DBG | Closing plugin on server side
	I0526 21:26:08.227720  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.227733  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.227744  527485 main.go:128] libmachine: Making call to close driver server
	I0526 21:26:08.227756  527485 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .Close
	I0526 21:26:08.227952  527485 main.go:128] libmachine: Successfully made call to close driver server
	I0526 21:26:08.227964  527485 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 21:26:08.282181  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:08.282200  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:08.282205  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:08.282213  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:08.284910  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:08.284928  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:08.284935  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:08.284941  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:08.284945  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:08 GMT
	I0526 21:26:08.284949  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:08.284954  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:08.285156  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:08.782297  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:08.782321  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:08.782327  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:08.782330  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:08.785128  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:08.785141  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:08.785145  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:08.785148  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:08.785153  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:08.785156  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:08 GMT
	I0526 21:26:08.785163  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:08.785383  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:09.282567  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:09.282601  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:09.282609  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:09.282616  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:09.285756  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:09.285781  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:09.285787  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:09.285793  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:09.285798  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:09.285803  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:09.285807  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:09 GMT
	I0526 21:26:09.286343  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:09.782429  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:09.782459  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:09.782467  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:09.782471  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:09.785669  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:09.785690  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:09.785694  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:09.785697  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:09.785700  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:09.785703  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:09.785706  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:09 GMT
	I0526 21:26:09.785974  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:09.786210  527485 node_ready.go:58] node "multinode-20210526212238-510955-m02" has status "Ready":"False"
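The repeated GET requests above are the node_ready.go wait recorded at 21:26:07: fetch the Node object roughly every 500ms and keep going, for up to 6m0s, until its NodeReady condition reports True. Below is a compact client-go sketch of that same wait; it is not minikube's code, and the kubeconfig path and node name are copied from the log as placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady checks the NodeReady condition on a Node object.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig"
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-20210526212238-510955-m02", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice per second
	}
	fmt.Println("timed out waiting for node to become Ready")
}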
	I0526 21:26:10.282294  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:10.282323  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:10.282328  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:10.282332  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:10.287758  527485 round_trippers.go:448] Response Status: 200 OK in 5 milliseconds
	I0526 21:26:10.287777  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:10.287783  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:10.287788  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:10 GMT
	I0526 21:26:10.287792  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:10.287796  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:10.287800  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:10.288064  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:10.781956  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:10.781982  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:10.781987  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:10.781992  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:10.785443  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:10.785462  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:10.785467  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:10.785473  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:10 GMT
	I0526 21:26:10.785477  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:10.785481  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:10.785485  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:10.785778  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:11.282455  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:11.282481  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:11.282486  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:11.282490  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:11.285437  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:11.285458  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:11.285465  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:11.285470  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:11.285472  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:11.285475  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:11.285478  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:11 GMT
	I0526 21:26:11.286729  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:11.782509  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:11.782536  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:11.782541  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:11.782545  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:11.785153  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:11.785171  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:11.785175  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:11.785179  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:11.785181  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:11.785184  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:11.785187  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:11 GMT
	I0526 21:26:11.785305  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:12.282135  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:12.282160  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:12.282166  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:12.282170  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:12.284771  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:12.284787  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:12.284798  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:12.284801  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:12.284806  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:12.284809  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:12.284882  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:12 GMT
	I0526 21:26:12.284996  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:12.285281  527485 node_ready.go:58] node "multinode-20210526212238-510955-m02" has status "Ready":"False"
	I0526 21:26:12.782057  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:12.782080  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:12.782085  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:12.782089  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:12.784478  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:12.784501  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:12.784507  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:12 GMT
	I0526 21:26:12.784514  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:12.784518  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:12.784527  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:12.784532  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:12.784737  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:13.282533  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:13.282565  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:13.282576  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:13.282582  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:13.286379  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:13.286395  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:13.286399  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:13.286403  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:13 GMT
	I0526 21:26:13.286406  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:13.286408  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:13.286411  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:13.286978  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:13.782029  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:13.782051  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:13.782057  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:13.782061  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:13.785869  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:13.785888  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:13.785893  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:13.785896  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:13.785899  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:13.785902  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:13.785905  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:13 GMT
	I0526 21:26:13.786356  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:14.282505  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:14.282530  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:14.282536  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:14.282540  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:14.285358  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:14.285374  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:14.285378  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:14.285381  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:14.285384  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:14.285387  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:14.285390  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:14 GMT
	I0526 21:26:14.285904  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:14.286147  527485 node_ready.go:58] node "multinode-20210526212238-510955-m02" has status "Ready":"False"
	I0526 21:26:14.782213  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:14.782251  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:14.782264  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:14.782275  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:14.785317  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:14.785338  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:14.785343  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:14 GMT
	I0526 21:26:14.785348  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:14.785352  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:14.785357  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:14.785360  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:14.785780  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:15.281994  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:15.282021  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:15.282026  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:15.282030  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:15.284857  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:15.284896  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:15.284901  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:15.284906  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:15.284910  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:15.284914  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:15.284918  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:15 GMT
	I0526 21:26:15.285547  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:15.782602  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:15.782662  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:15.782681  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:15.782697  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:15.785867  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:15.785881  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:15.785886  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:15.785891  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:15.785900  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:15.785905  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:15.785910  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:15 GMT
	I0526 21:26:15.786228  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:16.282045  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:16.282068  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.282074  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.282078  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.284876  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.284896  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.284901  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.284905  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.284910  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.284914  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.284919  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.285029  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"617","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5546 chars]
	I0526 21:26:16.782674  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:16.782700  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.782705  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.782709  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.787282  527485 round_trippers.go:448] Response Status: 200 OK in 4 milliseconds
	I0526 21:26:16.787296  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.787302  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.787306  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.787311  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.787316  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.787320  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.787682  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"643","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5328 chars]
	I0526 21:26:16.788003  527485 node_ready.go:49] node "multinode-20210526212238-510955-m02" has status "Ready":"True"
	I0526 21:26:16.788033  527485 node_ready.go:38] duration metric: took 9.014539124s waiting for node "multinode-20210526212238-510955-m02" to be "Ready" ...
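
For reference, the cycle logged above is a plain readiness poll: GET the node object, inspect its Ready condition, sleep roughly half a second, and repeat until the condition flips to True or the timeout expires. The following is a minimal client-go sketch of that pattern, not minikube's actual node_ready.go code; the waitNodeReady helper name, the 500ms interval, and the kubeconfig path are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports a
// Ready=True condition, mirroring the GET-every-~500ms loop in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-20210526212238-510955-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
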
	I0526 21:26:16.788053  527485 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0526 21:26:16.788131  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0526 21:26:16.788143  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.788150  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.788155  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.791856  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.791870  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.791873  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.791876  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.791879  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.791882  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.791884  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.794499  527485 request.go:1107] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"643"},"items":[{"metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},
"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"n [truncated 66009 chars]
	I0526 21:26:16.796026  527485 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.796089  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-tw67b
	I0526 21:26:16.796100  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.796106  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.796110  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.798364  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.798377  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.798381  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.798385  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.798387  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.798391  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.798396  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.798787  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-74ff55c5b-tw67b","generateName":"coredns-74ff55c5b-","namespace":"kube-system","uid":"a0522c32-9960-4c21-8a5a-d0b137009166","resourceVersion":"500","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"74ff55c5b"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-74ff55c5b","uid":"605a8716-bb95-4dc9-a518-ec6d2a3d080e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"605a8716-bb95-4dc9-a518-ec6d2a3d080e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:
containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{ [truncated 5780 chars]
	I0526 21:26:16.799179  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.799196  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.799202  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.799208  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.801151  527485 round_trippers.go:448] Response Status: 200 OK in 1 milliseconds
	I0526 21:26:16.801169  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.801175  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.801182  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.801193  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.801198  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.801207  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.801323  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.801577  527485 pod_ready.go:92] pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.801590  527485 pod_ready.go:81] duration metric: took 5.537684ms waiting for pod "coredns-74ff55c5b-tw67b" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.801598  527485 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.801646  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210526212238-510955
	I0526 21:26:16.801657  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.801663  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.801669  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.804138  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.804148  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.804155  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.804160  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.804166  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.804171  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.804175  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.804609  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210526212238-510955","namespace":"kube-system","uid":"6e073b61-d86c-4e7a-a1ad-aa5daefd710b","resourceVersion":"539","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.mirror":"34530b4d5ce1b17919f3b8976b2d0456","kubernetes.io/config.seen":"2021-05-26T21:23:43.638982161Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5642 chars]
	I0526 21:26:16.804940  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.804955  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.804961  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.804967  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.807074  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.807112  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.807123  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.807127  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.807132  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.807137  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.807142  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.807917  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.808139  527485 pod_ready.go:92] pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.808152  527485 pod_ready.go:81] duration metric: took 6.548202ms waiting for pod "etcd-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.808170  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.808219  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210526212238-510955
	I0526 21:26:16.808228  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.808235  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.808242  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.810336  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.810352  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.810357  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.810361  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.810365  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.810370  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.810374  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.810791  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210526212238-510955","namespace":"kube-system","uid":"5d446255-3487-4319-9b9f-2294a93fd226","resourceVersion":"447","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.mirror":"b42b6879229f245abab6047de8662a2f","kubernetes.io/config.seen":"2021-05-26T21:23:43.638984722Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 7266 chars]
	I0526 21:26:16.811070  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.811083  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.811092  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.811098  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.813937  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:16.813950  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.813955  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.813959  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.813963  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.813968  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.813973  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.814281  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.814488  527485 pod_ready.go:92] pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.814499  527485 pod_ready.go:81] duration metric: took 6.318765ms waiting for pod "kube-apiserver-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.814510  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.814550  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210526212238-510955
	I0526 21:26:16.814560  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.814566  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.814572  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.818018  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.818030  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.818034  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.818037  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.818040  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.818043  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.818047  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.818941  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210526212238-510955","namespace":"kube-system","uid":"ff663293-6f11-48e7-9409-1637114dc587","resourceVersion":"546","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.mirror":"474c55dfb64741cc485e46b6bb9f2dc0","kubernetes.io/config.seen":"2021-05-26T21:23:43.638987620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6822 chars]
	I0526 21:26:16.819293  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:16.819317  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.819325  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.819333  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.822487  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.822498  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.822503  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.822507  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.822511  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.822516  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.822521  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.823589  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:16.823790  527485 pod_ready.go:92] pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:16.823802  527485 pod_ready.go:81] duration metric: took 9.28412ms waiting for pod "kube-controller-manager-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
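
Each pod check in this phase has the same shape: fetch the pod, read its Ready condition, and cross-check the node it is scheduled on. Below is a hedged sketch of how a label-driven wait for system-critical pods could look with client-go; the helper names, the one-second poll interval, and the exact label selectors are assumptions for illustration, not the harness's pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether a pod carries a Ready=True condition.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitLabelledPodsReady waits until every kube-system pod matching the
// selector is Ready, polling the list endpoint much like the log above.
func waitLabelledPodsReady(ctx context.Context, cs kubernetes.Interface, selector string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for i := range pods.Items {
			if !podIsReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Illustrative subset of the system-critical labels listed in the log.
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "k8s-app=kube-proxy"} {
		if err := waitLabelledPodsReady(context.Background(), cs, sel, 6*time.Minute); err != nil {
			panic(fmt.Errorf("pods %q never became Ready: %w", sel, err))
		}
	}
	fmt.Println("system-critical pods are Ready")
}
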
	I0526 21:26:16.823812  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7l2f" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:16.983206  527485 request.go:591] Throttling request took 159.360803ms, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7l2f
	I0526 21:26:16.983247  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7l2f
	I0526 21:26:16.983252  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:16.983257  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:16.983262  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:16.986689  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:16.986703  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:16.986708  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:16.986712  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:16.986717  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:16.986721  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:16.986725  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:16 GMT
	I0526 21:26:16.987176  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q7l2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8e75477a-14d2-46d9-8fa8-32dd3a2a4fc4","resourceVersion":"628","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"controller-revision-hash":"b89db7f56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"59f7a309-d89a-4050-8e82-fc8da888387f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59f7a309-d89a-4050-8e82-fc8da888387f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".":{ [truncated 5533 chars]
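
The "Throttling request took ..." lines are emitted from client-go's request path (request.go) when its client-side rate limiter queues a request after the default QPS/Burst budget (5/10) is used up by back-to-back calls like the polling above. A small sketch of raising those limits on a rest.Config follows; the chosen values are illustrative only and not what minikube actually configures.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// client-go defaults (QPS=5, Burst=10) are what trigger the throttling
	// messages under rapid polling; raising them reduces client-side
	// queueing at the cost of more load on the API server. Values are
	// illustrative, not a recommendation.
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs
	fmt.Printf("client configured against %s with QPS=%v Burst=%d\n", cfg.Host, cfg.QPS, cfg.Burst)
}
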
	I0526 21:26:17.182698  527485 request.go:591] Throttling request took 195.251879ms, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:17.182829  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955-m02
	I0526 21:26:17.182846  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.182858  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.182868  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.185282  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.185306  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.185310  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.185314  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.185317  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.185321  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.185353  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.185464  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955-m02","uid":"67a94a89-b2ce-441e-8b6d-a1198d6b46b1","resourceVersion":"643","creationTimestamp":"2021-05-26T21:26:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","t
ime":"2021-05-26T21:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta [truncated 5328 chars]
	I0526 21:26:17.185655  527485 pod_ready.go:92] pod "kube-proxy-q7l2f" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:17.185665  527485 pod_ready.go:81] duration metric: took 361.847259ms waiting for pod "kube-proxy-q7l2f" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.185673  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.383075  527485 request.go:591] Throttling request took 197.367812ms, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qbl42
	I0526 21:26:17.383116  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qbl42
	I0526 21:26:17.383123  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.383127  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.383144  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.385686  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.385701  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.385706  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.385711  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.385715  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.385719  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.385724  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.386210  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qbl42","generateName":"kube-proxy-","namespace":"kube-system","uid":"950a915d-c5f0-4e6f-bc12-ee97013032f0","resourceVersion":"453","creationTimestamp":"2021-05-26T21:23:53Z","labels":{"controller-revision-hash":"b89db7f56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"59f7a309-d89a-4050-8e82-fc8da888387f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59f7a309-d89a-4050-8e82-fc8da888387f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".":{ [truncated 5529 chars]
	I0526 21:26:17.582744  527485 request.go:591] Throttling request took 196.212706ms, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.582878  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.582918  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.582938  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.582957  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.586038  527485 round_trippers.go:448] Response Status: 200 OK in 3 milliseconds
	I0526 21:26:17.586053  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.586059  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.586064  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.586068  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.586072  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.586077  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.586421  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:17.586719  527485 pod_ready.go:92] pod "kube-proxy-qbl42" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:17.586735  527485 pod_ready.go:81] duration metric: took 401.054991ms waiting for pod "kube-proxy-qbl42" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.586747  527485 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.783036  527485 request.go:591] Throttling request took 196.229128ms, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:26:17.783077  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210526212238-510955
	I0526 21:26:17.783082  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.783086  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.783091  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.785449  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.785465  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.785474  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.785480  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.785485  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.785491  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.785496  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.785637  527485 request.go:1107] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210526212238-510955","namespace":"kube-system","uid":"66bb91fe-7af2-400f-a477-fe2dc3428e83","resourceVersion":"547","creationTimestamp":"2021-05-26T21:23:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.mirror":"6b4a0ee8b3d15a1c2e47c15d32e6eb0d","kubernetes.io/config.seen":"2021-05-26T21:23:43.638976446Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4552 chars]
	I0526 21:26:17.983216  527485 request.go:591] Throttling request took 197.353257ms, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.983271  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes/multinode-20210526212238-510955
	I0526 21:26:17.983278  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:17.983287  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:17.983295  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:17.986203  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:17.986220  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:17.986226  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:17.986231  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:17 GMT
	I0526 21:26:17.986236  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:17.986241  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:17.986245  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:17.986391  527485 request.go:1107] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1",
"time":"2021-05-26T21:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:me [truncated 6102 chars]
	I0526 21:26:17.986648  527485 pod_ready.go:92] pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace has status "Ready":"True"
	I0526 21:26:17.986659  527485 pod_ready.go:81] duration metric: took 399.904203ms waiting for pod "kube-scheduler-multinode-20210526212238-510955" in "kube-system" namespace to be "Ready" ...
	I0526 21:26:17.986668  527485 pod_ready.go:38] duration metric: took 1.198598504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0526 21:26:17.986690  527485 system_svc.go:44] waiting for kubelet service to be running ....
	I0526 21:26:17.986746  527485 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:26:17.997734  527485 system_svc.go:56] duration metric: took 11.038645ms WaitForService to wait for kubelet.
	I0526 21:26:17.997761  527485 kubeadm.go:547] duration metric: took 10.255571644s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0526 21:26:17.997798  527485 node_conditions.go:102] verifying NodePressure condition ...
	I0526 21:26:18.183268  527485 request.go:591] Throttling request took 185.408975ms, request: GET:https://192.168.39.229:8443/api/v1/nodes
	I0526 21:26:18.183312  527485 round_trippers.go:422] GET https://192.168.39.229:8443/api/v1/nodes
	I0526 21:26:18.183317  527485 round_trippers.go:429] Request Headers:
	I0526 21:26:18.183324  527485 round_trippers.go:433]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0526 21:26:18.183329  527485 round_trippers.go:433]     Accept: application/json, */*
	I0526 21:26:18.185874  527485 round_trippers.go:448] Response Status: 200 OK in 2 milliseconds
	I0526 21:26:18.185890  527485 round_trippers.go:451] Response Headers:
	I0526 21:26:18.185894  527485 round_trippers.go:454]     Cache-Control: no-cache, private
	I0526 21:26:18.185898  527485 round_trippers.go:454]     Content-Type: application/json
	I0526 21:26:18.185901  527485 round_trippers.go:454]     X-Kubernetes-Pf-Flowschema-Uid: 125a3402-aeff-4587-86fa-93944c68a449
	I0526 21:26:18.185904  527485 round_trippers.go:454]     X-Kubernetes-Pf-Prioritylevel-Uid: 45d05e10-20f0-47e9-9aae-aa04578154ea
	I0526 21:26:18.185908  527485 round_trippers.go:454]     Date: Wed, 26 May 2021 21:26:18 GMT
	I0526 21:26:18.186054  527485 request.go:1107] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"645"},"items":[{"metadata":{"name":"multinode-20210526212238-510955","uid":"3248e838-d041-4e3e-b961-9476bcb6ac55","resourceVersion":"505","creationTimestamp":"2021-05-26T21:23:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210526212238-510955","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1440f8d7119ca73787e7dc88324b0d13449454ff","minikube.k8s.io/name":"multinode-20210526212238-510955","minikube.k8s.io/updated_at":"2021_05_26T21_23_38_0700","minikube.k8s.io/version":"v1.20.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager
":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-05-26T2 [truncated 12475 chars]
	I0526 21:26:18.186429  527485 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0526 21:26:18.186450  527485 node_conditions.go:123] node cpu capacity is 2
	I0526 21:26:18.186463  527485 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0526 21:26:18.186475  527485 node_conditions.go:123] node cpu capacity is 2
	I0526 21:26:18.186482  527485 node_conditions.go:105] duration metric: took 188.674076ms to run NodePressure ...
	I0526 21:26:18.186496  527485 start.go:214] waiting for startup goroutines ...
	I0526 21:26:18.227617  527485 start.go:462] kubectl: 1.20.5, cluster: 1.20.2 (minor skew: 0)
	I0526 21:26:18.230027  527485 out.go:170] * Done! kubectl is now configured to use "multinode-20210526212238-510955" cluster and "default" namespace by default
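
The wait loop logged above (pod_ready.go, system_svc.go, node_conditions.go) is minikube polling the apiserver until every system-critical pod reports Ready, confirming the kubelet service is active via systemctl, and then reading each node's capacity and pressure conditions before declaring the cluster ready. For readers who want to reproduce that final node check against the same cluster, here is a minimal client-go sketch; it assumes a kubeconfig at the default ~/.kube/config, and the file name, helper choices, and printed format are illustrative rather than minikube's own implementation.

    // nodecheck.go: a minimal sketch of the Ready / pressure-condition and
    // capacity check that node_conditions.go logs above.
    // Assumes a kubeconfig at ~/.kube/config (not minikube's own code).
    package main

    import (
        "context"
        "fmt"
        "log"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            // Same figures node_conditions.go reports: CPU and ephemeral-storage capacity.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                // A healthy node is Ready=True with the three pressure conditions False.
                switch c.Type {
                case corev1.NodeReady, corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }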
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	65b1b69ac45a2       8c811b4aec35f       2 minutes ago       Running             busybox                   0                   1072cb707c440
	a9593dff4428d       bfe3a36ebd252       4 minutes ago       Running             coredns                   0                   1d96eb581f035
	5d3df8c94eaed       6e38f40d628db       4 minutes ago       Running             storage-provisioner       0                   722b1b257c571
	69df1859ce4d1       6de166512aa22       5 minutes ago       Running             kindnet-cni               0                   53490c652b9e5
	de6efc6fec4b2       43154ddb57a83       5 minutes ago       Running             kube-proxy                0                   038c42970362d
	c8538106e966b       0369cf4303ffd       5 minutes ago       Running             etcd                      0                   2ad404c6a9c44
	e6bb9bee7539a       ed2c44fbdd78b       5 minutes ago       Running             kube-scheduler            0                   24fd8b8599a6e
	2314e41b1b443       a27166429d98e       5 minutes ago       Running             kube-controller-manager   0                   73ada73fbbf0b
	a0581c0e5409b       a8c2fdb8bf76e       5 minutes ago       Running             kube-apiserver            0                   fe43674906f20
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:29:00 UTC. --
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.309661198Z" level=info msg="Exec process \"f7f5df022fa6389fc48c539e9c176c4764ca2c7f56c65e77b4e5eef36dbe3de5\" exits with exit code 0 and error <nil>"
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.309692729Z" level=info msg="Finish piping \"stdout\" of container exec \"f7f5df022fa6389fc48c539e9c176c4764ca2c7f56c65e77b4e5eef36dbe3de5\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.309725890Z" level=info msg="Finish piping \"stderr\" of container exec \"f7f5df022fa6389fc48c539e9c176c4764ca2c7f56c65e77b4e5eef36dbe3de5\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.687053254Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" with command [nslookup kubernetes.default], tty false and stdin false"
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.687633785Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" returns URL \"http://192.168.122.92:10010/exec/YMpM3wxp\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.764350132Z" level=info msg="Finish piping \"stdout\" of container exec \"f9f4b085c72d5a55e4a968a84b91407c68bf11856e2cc297476dfb714eac4ee2\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.764583210Z" level=info msg="Finish piping \"stderr\" of container exec \"f9f4b085c72d5a55e4a968a84b91407c68bf11856e2cc297476dfb714eac4ee2\""
	May 26 21:26:22 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:22.765539605Z" level=info msg="Exec process \"f9f4b085c72d5a55e4a968a84b91407c68bf11856e2cc297476dfb714eac4ee2\" exits with exit code 0 and error <nil>"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.146873242Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" with command [nslookup kubernetes.default.svc.cluster.local], tty false and stdin false"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.147013185Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" returns URL \"http://192.168.122.92:10010/exec/x8dPJXEC\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.231284522Z" level=info msg="Exec process \"b548f73424ce2298287dcd89720ef9f6b3bba8f0f2f9492315835164a109ba60\" exits with exit code 0 and error <nil>"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.232260472Z" level=info msg="Finish piping \"stdout\" of container exec \"b548f73424ce2298287dcd89720ef9f6b3bba8f0f2f9492315835164a109ba60\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.232712636Z" level=info msg="Finish piping \"stderr\" of container exec \"b548f73424ce2298287dcd89720ef9f6b3bba8f0f2f9492315835164a109ba60\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.731169077Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" with command [sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3], tty false and stdin false"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.731227728Z" level=info msg="Exec for \"65b1b69ac45a25fa6e0343c53311b36ad1009ff19ed496df87c4c2cbf14e792c\" returns URL \"http://192.168.122.92:10010/exec/p47VCeNE\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.818927899Z" level=info msg="Exec process \"2492d26147794af576f041ece1f96ff9b4387411e8a7ebe1aebab298c37ba305\" exits with exit code 0 and error <nil>"
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.819225501Z" level=info msg="Finish piping \"stdout\" of container exec \"2492d26147794af576f041ece1f96ff9b4387411e8a7ebe1aebab298c37ba305\""
	May 26 21:26:23 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:26:23.820587364Z" level=info msg="Finish piping \"stderr\" of container exec \"2492d26147794af576f041ece1f96ff9b4387411e8a7ebe1aebab298c37ba305\""
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.755580743Z" level=info msg="RemoveImage \"minikube-local-cache-test:functional-20210526211257-510955\""
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.761220885Z" level=info msg="ImageDelete event &ImageDelete{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,XXX_unrecognized:[],}"
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.762973064Z" level=info msg="ImageDelete event &ImageDelete{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,XXX_unrecognized:[],}"
	May 26 21:27:13 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:13.769922070Z" level=info msg="RemoveImage \"minikube-local-cache-test:functional-20210526211257-510955\" returns successfully"
	May 26 21:27:14 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:14.144764721Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{},XXX_unrecognized:[],}"
	May 26 21:27:14 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:14.153374839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019ff3187ef5660d1df17b8caf469d5fc50b72267134348e040397c4d49d830,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	May 26 21:27:14 multinode-20210526212238-510955 containerd[2157]: time="2021-05-26T21:27:14.153849282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20210526211257-510955,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	
	* 
	* ==> coredns [a9593dff4428d4d0f5cc31832cefb42bf39b62a98d32d295dd9502a1ef6d307a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210526212238-510955
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210526212238-510955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1440f8d7119ca73787e7dc88324b0d13449454ff
	                    minikube.k8s.io/name=multinode-20210526212238-510955
	                    minikube.k8s.io/updated_at=2021_05_26T21_23_38_0700
	                    minikube.k8s.io/version=v1.20.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 May 2021 21:23:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210526212238-510955
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 May 2021 21:28:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 May 2021 21:27:44 +0000   Wed, 26 May 2021 21:24:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-20210526212238-510955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbd77f9e2b0d4ce7860fb21881bb7ff3
	  System UUID:                fbd77f9e-2b0d-4ce7-860f-b21881bb7ff3
	  Boot ID:                    9a60591c-de07-4474-bb32-101b0a9643ff
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.4
	  Kubelet Version:            v1.20.2
	  Kube-Proxy Version:         v1.20.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6cd5ff77cb-4g265                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 coredns-74ff55c5b-tw67b                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     5m7s
	  kube-system                 etcd-multinode-20210526212238-510955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-2wgbs                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-multinode-20210526212238-510955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-multinode-20210526212238-510955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-qbl42                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-multinode-20210526212238-510955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  100Mi (0%)   0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m33s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m32s (x4 over 5m33s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s (x3 over 5m33s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s (x3 over 5m33s)  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m17s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m6s                   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m56s                  kubelet     Node multinode-20210526212238-510955 status is now: NodeReady
	
	
	Name:               multinode-20210526212238-510955-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210526212238-510955-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 May 2021 21:26:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210526212238-510955-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 May 2021 21:28:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 May 2021 21:26:36 +0000   Wed, 26 May 2021 21:26:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    multinode-20210526212238-510955-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f4ce45cafcc4968b1990f7d389bdc28
	  System UUID:                8f4ce45c-afcc-4968-b199-0f7d389bdc28
	  Boot ID:                    b644d687-3a13-4a74-8cd4-87bdfa46d2ca
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.4
	  Kubelet Version:            v1.20.2
	  Kube-Proxy Version:         v1.20.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6cd5ff77cb-dlslt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kindnet-wvlst               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m54s
	  kube-system                 kube-proxy-q7l2f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m54s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m54s (x2 over 2m54s)  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x2 over 2m54s)  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x2 over 2m54s)  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m52s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                2m44s                  kubelet     Node multinode-20210526212238-510955-m02 status is now: NodeReady
	
	
	Name:               multinode-20210526212238-510955-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210526212238-510955-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 May 2021 21:27:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210526212238-510955-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 May 2021 21:27:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 May 2021 21:27:23 +0000   Wed, 26 May 2021 21:28:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    multinode-20210526212238-510955-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8af9c1527ba34d8e88af1625a32590a7
	  System UUID:                8af9c152-7ba3-4d8e-88af-1625a32590a7
	  Boot ID:                    23ed8454-1f5f-4bcb-92e6-24b7647d8dac
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.4
	  Kubelet Version:            v1.20.2
	  Kube-Proxy Version:         v1.20.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-b75lx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      107s
	  kube-system                 kube-proxy-ftdx6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 107s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x2 over 107s)  kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x2 over 107s)  kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x2 over 107s)  kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 105s                 kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                97s                  kubelet     Node multinode-20210526212238-510955-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [May26 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092301] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.726361] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.393840] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.034647] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.775022] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1723 comm=systemd-network
	[  +1.684954] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006011] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.532510] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[May26 21:23] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	[  +0.282151] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
	[  +9.202259] systemd-fstab-generator[2335]: Ignoring "noauto" for root device
	[ +16.373129] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
	[ +16.598445] kauditd_printk_skb: 38 callbacks suppressed
	[May26 21:24] kauditd_printk_skb: 50 callbacks suppressed
	[ +45.152218] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [c8538106e966bebc2d596c1688fdb11c413a37e122bd18c454b4db5bec7f55ad] <==
	* WARNING: 2021/05/26 21:27:02 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-05-26 21:27:03.281976 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000031526s) to execute
	2021-05-26 21:27:03.612927 W | wal: sync duration of 1.019180522s, expected less than 1s
	2021-05-26 21:27:03.613353 W | etcdserver: request "header:<ID:7886218195963551091 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe\" mod_revision:582 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe\" value_size:762 lease:7886218195963551089 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe\" > >>" with result "size:16" took too long (1.160121773s) to execute
	2021-05-26 21:27:03.614951 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (2.525487318s) to execute
	2021-05-26 21:27:03.615643 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (322.673894ms) to execute
	2021-05-26 21:27:03.616035 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (993.098662ms) to execute
	2021-05-26 21:27:03.616361 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.443267942s) to execute
	2021-05-26 21:27:03.617223 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.705993682s) to execute
	2021-05-26 21:27:03.618140 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:11041" took too long (2.129214129s) to execute
	2021-05-26 21:27:05.247138 W | wal: sync duration of 1.601172104s, expected less than 1s
	2021-05-26 21:27:05.334917 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.052221969s) to execute
	2021-05-26 21:27:10.917398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:13.651150 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (116.920368ms) to execute
	2021-05-26 21:27:13.658995 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (122.772657ms) to execute
	2021-05-26 21:27:20.917297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:30.917876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:40.917425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:27:50.917259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:00.916814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:10.917703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:20.917068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:30.916531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:40.917394 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-05-26 21:28:50.917825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  21:29:00 up 6 min,  0 users,  load average: 0.35, 0.49, 0.25
	Linux multinode-20210526212238-510955 4.19.182 #1 SMP Wed May 5 21:20:39 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [a0581c0e5409bf38a71c61d4cf776a7fcd3dc38c24a5a6a52165552afd2bc85c] <==
	* Trace[554095935]: [3.323753444s] [3.323753444s] END
	I0526 21:27:03.621063       1 trace.go:205] Trace[1564844825]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-multinode-20210526212238-510955.1682baefe39290fe,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.39.229 (26-May-2021 21:27:00.296) (total time: 3324ms):
	Trace[1564844825]: ---"Object stored in database" 3288ms (21:27:00.620)
	Trace[1564844825]: [3.324115066s] [3.324115066s] END
	I0526 21:27:03.622888       1 trace.go:205] Trace[828582385]: "GuaranteedUpdate etcd3" type:*core.Endpoints (26-May-2021 21:27:00.341) (total time: 3281ms):
	Trace[828582385]: ---"Transaction committed" 3281ms (21:27:00.622)
	Trace[828582385]: [3.281692282s] [3.281692282s] END
	I0526 21:27:03.623162       1 trace.go:205] Trace[1847006298]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.229 (26-May-2021 21:27:00.340) (total time: 3282ms):
	Trace[1847006298]: ---"Object stored in database" 3281ms (21:27:00.623)
	Trace[1847006298]: [3.282217035s] [3.282217035s] END
	I0526 21:27:03.631154       1 trace.go:205] Trace[2146471206]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.87 (26-May-2021 21:27:01.487) (total time: 2143ms):
	Trace[2146471206]: ---"Listing from storage done" 2132ms (21:27:00.620)
	Trace[2146471206]: [2.143426392s] [2.143426392s] END
	I0526 21:27:05.336213       1 trace.go:205] Trace[1522091935]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.39.229 (26-May-2021 21:27:03.648) (total time: 1687ms):
	Trace[1522091935]: ---"Object stored in database" 1687ms (21:27:00.336)
	Trace[1522091935]: [1.687793823s] [1.687793823s] END
	I0526 21:27:19.889601       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:27:19.889757       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:27:19.889783       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:27:58.932349       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:27:58.933036       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:27:58.933326       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0526 21:28:30.988777       1 client.go:360] parsed scheme: "passthrough"
	I0526 21:28:30.989065       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0526 21:28:30.989089       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [2314e41b1b44395e7b5a7c32d6859941d4a5db601171f0dd18ddefb571d25f18] <==
	* I0526 21:23:53.906201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:23:53.937294       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0526 21:23:53.937309       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0526 21:24:08.320331       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0526 21:26:06.517135       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955-m02" does not exist
	I0526 21:26:06.674802       1 range_allocator.go:373] Set node multinode-20210526212238-510955-m02 PodCIDR to [10.244.1.0/24]
	I0526 21:26:06.700780       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wvlst"
	I0526 21:26:06.703138       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q7l2f"
	E0526 21:26:06.758329       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"59f7a309-d89a-4050-8e82-fc8da888387f", ResourceVersion:"454", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757661018, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000d4fde0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d4fe00)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000d4fe20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d4fe40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000d4fe60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00137d640), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d4fe80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d4fea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000d4fee0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001a28060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ecae78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000cd1030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000107048)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000ecb108)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	E0526 21:26:06.766354       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c6806fba-0252-46f8-bc69-c8732fdb46d7", ResourceVersion:"472", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757661018, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e377a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e377c0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e377e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e37800)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e37820), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Crea
tionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e37840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.Flex
VolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e37860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVo
lumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CS
IVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e37880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*
v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e378a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e378e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amou
nt{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropa
gation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001abaa80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f870e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000d00af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(ni
l), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000362bf8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000f87130)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetConditio
n(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0526 21:26:06.798644       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"59f7a309-d89a-4050-8e82-fc8da888387f", ResourceVersion:"605", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757661018, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e36dc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e36de0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e36e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e36e20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e36e40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0011a2dc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e36e60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e36e80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e36ec0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001aba900), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f87c68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000cc1f80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a85510)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000f87cb8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	W0526 21:26:08.334957       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955-m02. Assuming now as a timestamp.
	I0526 21:26:08.335583       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955-m02 event: Registered Node multinode-20210526212238-510955-m02 in Controller"
	I0526 21:26:19.087324       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6cd5ff77cb to 2"
	I0526 21:26:19.112764       1 event.go:291] "Event occurred" object="default/busybox-6cd5ff77cb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6cd5ff77cb-dlslt"
	I0526 21:26:19.134741       1 event.go:291] "Event occurred" object="default/busybox-6cd5ff77cb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6cd5ff77cb-4g265"
	W0526 21:27:13.522686       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210526212238-510955-m03" does not exist
	I0526 21:27:13.673351       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ftdx6"
	I0526 21:27:13.690542       1 range_allocator.go:373] Set node multinode-20210526212238-510955-m03 PodCIDR to [10.244.2.0/24]
	I0526 21:27:13.690722       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b75lx"
	I0526 21:27:18.341118       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955-m03" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210526212238-510955-m03 event: Registered Node multinode-20210526212238-510955-m03 in Controller"
	W0526 21:27:18.341335       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210526212238-510955-m03. Assuming now as a timestamp.
	I0526 21:28:08.360412       1 event.go:291] "Event occurred" object="multinode-20210526212238-510955-m03" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20210526212238-510955-m03 status is now: NodeNotReady"
	I0526 21:28:08.369881       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-ftdx6" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0526 21:28:08.387263       1 event.go:291] "Event occurred" object="kube-system/kindnet-b75lx" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2] <==
	* I0526 21:23:54.629702       1 node.go:172] Successfully retrieved node IP: 192.168.39.229
	I0526 21:23:54.629813       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.39.229), assume IPv4 operation
	W0526 21:23:54.677087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0526 21:23:54.677377       1 server_others.go:185] Using iptables Proxier.
	I0526 21:23:54.678139       1 server.go:650] Version: v1.20.2
	I0526 21:23:54.678560       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0526 21:23:54.678810       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0526 21:23:54.680271       1 config.go:315] Starting service config controller
	I0526 21:23:54.680366       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0526 21:23:54.680391       1 config.go:224] Starting endpoint slice config controller
	I0526 21:23:54.680396       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0526 21:23:54.780835       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0526 21:23:54.780955       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e6bb9bee7539aa8f41eb665706dd077c17b8c28a54090cf4fffe439978194c08] <==
	* W0526 21:23:34.796410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0526 21:23:34.796897       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0526 21:23:34.861412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:23:34.862415       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0526 21:23:34.861578       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0526 21:23:34.861594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0526 21:23:34.865256       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0526 21:23:34.871182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0526 21:23:34.871367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0526 21:23:34.871423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0526 21:23:34.873602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0526 21:23:34.873877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0526 21:23:34.874313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0526 21:23:34.874540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0526 21:23:34.875162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0526 21:23:34.875282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0526 21:23:34.878224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0526 21:23:34.878386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0526 21:23:35.699206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0526 21:23:35.756603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0526 21:23:35.804897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0526 21:23:35.812802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0526 21:23:35.981887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0526 21:23:36.079577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0526 21:23:38.862952       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-05-26 21:22:49 UTC, end at Wed 2021-05-26 21:29:01 UTC. --
	May 26 21:23:44 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:44.350035    2767 reconciler.go:157] Reconciler: start to sync state
	May 26 21:23:49 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:49.171719    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.286184    2767 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.292064    2767 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:53.297677    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.473000    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.588715    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-cni-cfg") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589055    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-xtables-lock") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589618    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-zm2kt" (UniqueName: "kubernetes.io/secret/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-kindnet-token-zm2kt") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.589842    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aac3ff91-8f9c-4f4e-81fc-a859f780d67d-lib-modules") pod "kindnet-2wgbs" (UID: "aac3ff91-8f9c-4f4e-81fc-a859f780d67d")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.611915    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791552    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791755    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-lib-modules") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.791904    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xd4p4" (UniqueName: "kubernetes.io/secret/950a915d-c5f0-4e6f-bc12-ee97013032f0-kube-proxy-token-xd4p4") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:53 multinode-20210526212238-510955 kubelet[2767]: I0526 21:23:53.792035    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/950a915d-c5f0-4e6f-bc12-ee97013032f0-xtables-lock") pod "kube-proxy-qbl42" (UID: "950a915d-c5f0-4e6f-bc12-ee97013032f0")
	May 26 21:23:54 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:54.172944    2767 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	May 26 21:23:56 multinode-20210526212238-510955 kubelet[2767]: E0526 21:23:56.623072    2767 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod950a915d-c5f0-4e6f-bc12-ee97013032f0/de6efc6fec4b20903efb849817a9e57e284841a9fc6ef9bd1b83e563316cdaa2": RecentStats: unable to find data in memory cache]
	May 26 21:24:08 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:08.993599    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.010021    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159693    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-tmp") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159808    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-7ps8h" (UniqueName: "kubernetes.io/secret/a0522c32-9960-4c21-8a5a-d0b137009166-coredns-token-7ps8h") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159830    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a0522c32-9960-4c21-8a5a-d0b137009166-config-volume") pod "coredns-74ff55c5b-tw67b" (UID: "a0522c32-9960-4c21-8a5a-d0b137009166")
	May 26 21:24:09 multinode-20210526212238-510955 kubelet[2767]: I0526 21:24:09.159848    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hgxxq" (UniqueName: "kubernetes.io/secret/e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36-storage-provisioner-token-hgxxq") pod "storage-provisioner" (UID: "e29f2487-1ac5-4bb4-9ab8-6b5d5c43ec36")
	May 26 21:26:19 multinode-20210526212238-510955 kubelet[2767]: I0526 21:26:19.145582    2767 topology_manager.go:187] [topologymanager] Topology Admit Handler
	May 26 21:26:19 multinode-20210526212238-510955 kubelet[2767]: I0526 21:26:19.201692    2767 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cdspv" (UniqueName: "kubernetes.io/secret/07eb6d05-7a0d-41b2-b7f5-13145e0edcdb-default-token-cdspv") pod "busybox-6cd5ff77cb-4g265" (UID: "07eb6d05-7a0d-41b2-b7f5-13145e0edcdb")
	
	* 
	* ==> storage-provisioner [5d3df8c94eaedafbc7bb1e79012cbc72bc2ec88687f4bfc6f10fff8c1eb3107d] <==
	* I0526 21:24:10.174152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0526 21:24:10.283423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0526 21:24:10.285296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0526 21:24:10.325709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0526 21:24:10.333080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	I0526 21:24:10.329407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"694e5be2-46cf-4c76-aeac-70628468e6a3", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4 became leader
	I0526 21:24:10.440994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210526212238-510955_640f1575-3f2b-423b-9f51-48a3198dc1b4!
	

                                                
                                                
-- /stdout --
helpers_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210526212238-510955 -n multinode-20210526212238-510955
helpers_test.go:257: (dbg) Run:  kubectl --context multinode-20210526212238-510955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context multinode-20210526212238-510955 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context multinode-20210526212238-510955 describe pod : exit status 1 (45.872671ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:270: kubectl --context multinode-20210526212238-510955 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/StartAfterStop (10.09s)
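The repeated "Operation cannot be fulfilled on daemonsets.apps ...: the object has been modified; please apply your changes to the latest version and try again" entries in the kube-controller-manager log above are optimistic-concurrency (resourceVersion) conflicts on DaemonSet status writes. As a point of reference only, a minimal sketch of how such a write is usually retried with client-go's conflict-retry helper is shown here; the clientset wiring and the updateDaemonSetStatus helper name are illustrative assumptions, not minikube or Kubernetes controller code.

package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateDaemonSetStatus re-reads the DaemonSet and retries the status write
// whenever the apiserver answers with a 409 Conflict ("object has been modified").
func updateDaemonSetStatus(ctx context.Context, c kubernetes.Interface, ns, name string, mutate func(*appsv1.DaemonSet)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Fetch the latest copy so the update carries a current resourceVersion.
		ds, err := c.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		mutate(ds) // apply the status change to the fresh object
		_, err = c.AppsV1().DaemonSets(ns).UpdateStatus(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}

With this pattern the writer re-reads the latest object before each attempt, so a concurrent update by another client costs an extra round trip instead of a failed sync; the conflicts logged above are transient for the same reason.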

                                                
                                    
TestScheduledStopUnix (267.65s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:126: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210526214323-510955 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0526 21:44:19.840235  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
scheduled_stop_test.go:126: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210526214323-510955 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m2.622281678s)
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210526214323-510955 --schedule 5m
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:167: signal error was:  <nil>
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210526214323-510955 --schedule 8s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210526214323-510955 --cancel-scheduled
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210526214323-510955
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210526214323-510955 --schedule 5s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955: signal: killed (10.002539261s)
scheduled_stop_test.go:174: status error: signal: killed (may be ok)
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955: signal: killed (10.002504917s)
scheduled_stop_test.go:174: status error: signal: killed (may be ok)
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955: signal: killed (10.002833431s)
scheduled_stop_test.go:174: status error: signal: killed (may be ok)
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955: signal: killed (10.002710412s)
scheduled_stop_test.go:174: status error: signal: killed (may be ok)
scheduled_stop_test.go:181: error expected post-stop "Host" status to be -"Stopped"- but got *""*
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-05-26 21:45:39.851119461 +0000 UTC m=+3974.968667883
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955
E0526 21:47:22.888378  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210526214323-510955 -n scheduled-stop-20210526214323-510955: exit status 3 (2m10.064519344s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 21:47:49.913463  555611 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: connection timed out
	E0526 21:47:49.913489  555611 status.go:247] status error: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: connection timed out

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "scheduled-stop-20210526214323-510955" host is not running, skipping log retrieval (state="Error")
helpers_test.go:171: Cleaning up "scheduled-stop-20210526214323-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210526214323-510955
E0526 21:47:50.134574  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
--- FAIL: TestScheduledStopUnix (267.65s)
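The post-stop checks above are each killed after roughly 10 seconds ("signal: killed (10.00...s)") while the test waits for the host to report "Stopped", and the run fails because the observed state stays empty. A minimal sketch of that polling loop, assuming a minikube binary path and profile name supplied by the caller (waitForStopped is an illustrative helper, not part of the test suite):

package example

import (
	"context"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls `minikube status --format={{.Host}}` until the host
// reports "Stopped" or the overall deadline passes.
func waitForStopped(binary, profile string, deadline time.Duration) bool {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		// Kill each attempt after 10s, mirroring the "signal: killed" entries above.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		out, _ := exec.CommandContext(ctx, binary, "status", "--format={{.Host}}", "-p", profile).Output()
		cancel()
		if strings.TrimSpace(string(out)) == "Stopped" {
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

In the failing run above such a loop would exhaust its deadline, since the status command never returned "Stopped" before being killed.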

                                                
                                    
TestRunningBinaryUpgrade (899.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Run:  /tmp/minikube-v1.6.2.191947014.exe start -p running-upgrade-20210526215018-510955 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Done: /tmp/minikube-v1.6.2.191947014.exe start -p running-upgrade-20210526215018-510955 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m50.302677396s)
version_upgrade_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210526215018-510955 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0526 21:52:50.135193  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:129: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-20210526215018-510955 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109 (13m6.576198033s)

                                                
                                                
-- stdout --
	* [running-upgrade-20210526215018-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	* Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-20210526215018-510955 in cluster running-upgrade-20210526215018-510955
	* Updating the running kvm2 "running-upgrade-20210526215018-510955" VM ...
	* Preparing Kubernetes v1.17.0 on containerd 1.2.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0526 21:52:09.803851  558359 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:52:09.804016  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:52:09.804026  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:52:09.804031  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:52:09.804136  558359 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:52:09.804376  558359 out.go:298] Setting JSON to false
	I0526 21:52:09.839789  558359 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":20092,"bootTime":1622045838,"procs":174,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:52:09.839882  558359 start.go:120] virtualization: kvm guest
	I0526 21:52:09.842299  558359 out.go:170] * [running-upgrade-20210526215018-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:52:09.843990  558359 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:52:09.845489  558359 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:52:09.846862  558359 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:52:09.848348  558359 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:52:09.848704  558359 start_flags.go:473] config upgrade: Driver=kvm2
	I0526 21:52:09.848720  558359 start_flags.go:485] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c
	I0526 21:52:09.848804  558359 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/running-upgrade-20210526215018-510955/config.json ...
	I0526 21:52:09.849327  558359 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:52:09.849399  558359 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:52:09.860166  558359 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0526 21:52:09.860624  558359 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:52:09.861148  558359 main.go:128] libmachine: Using API Version  1
	I0526 21:52:09.861169  558359 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:52:09.861528  558359 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:52:09.861712  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:09.864036  558359 out.go:170] * Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	I0526 21:52:09.864085  558359 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:52:09.864447  558359 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:52:09.864488  558359 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:52:09.876219  558359 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0526 21:52:09.876653  558359 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:52:09.877228  558359 main.go:128] libmachine: Using API Version  1
	I0526 21:52:09.877258  558359 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:52:09.877644  558359 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:52:09.877872  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:09.908420  558359 out.go:170] * Using the kvm2 driver based on existing profile
	I0526 21:52:09.908448  558359 start.go:278] selected driver: kvm2
	I0526 21:52:09.908454  558359 start.go:751] validating driver "kvm2" against &{Name:running-upgrade-20210526215018-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.63 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:52:09.908585  558359 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:52:09.909242  558359 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:09.909395  558359 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 21:52:09.920963  558359 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 21:52:09.921043  558359 cni.go:93] Creating CNI manager for ""
	I0526 21:52:09.921053  558359 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0526 21:52:09.921068  558359 start_flags.go:273] config:
	{Name:running-upgrade-20210526215018-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.63 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:52:09.921155  558359 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:09.922972  558359 out.go:170] * Starting control plane node running-upgrade-20210526215018-510955 in cluster running-upgrade-20210526215018-510955
	I0526 21:52:09.922992  558359 preload.go:98] Checking if preload exists for k8s version v1.17.0 and runtime containerd
	W0526 21:52:10.029725  558359 preload.go:119] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.0-containerd-overlay2-amd64.tar.lz4 status code: 404
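
The 404 above means no preload tarball is published for Kubernetes v1.17.0 on containerd, so minikube falls back to caching each component image individually (the cache.go lines that follow). A minimal Go sketch of that availability probe, assuming a plain HTTP HEAD request rather than minikube's actual preload.go logic:

package main

import (
	"fmt"
	"net/http"
)

// preloadExists reports whether a preload tarball can be fetched from url.
// Illustrative only; the real check in minikube may differ.
func preloadExists(url string) (bool, error) {
	resp, err := http.Head(url)
	if err != nil {
		return false, err
	}
	resp.Body.Close()
	// A 404, as logged above, means the per-image cache path must be used instead.
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := preloadExists("https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.0-containerd-overlay2-amd64.tar.lz4")
	fmt.Printf("preload available: %v (err: %v)\n", ok, err)
}
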
	I0526 21:52:10.029919  558359 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/running-upgrade-20210526215018-510955/config.json ...
	I0526 21:52:10.030100  558359 cache.go:108] acquiring lock: {Name:mk1f8d1596dae0678a4382cc12c2651bcd889747 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030121  558359 cache.go:108] acquiring lock: {Name:mk2f8ceca8f30e3cca1664a31ab426848054fea8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030155  558359 cache.go:108] acquiring lock: {Name:mke246f85e04fafcfe0916f369cc73cb7c8aa208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030154  558359 cache.go:108] acquiring lock: {Name:mk7c3eaed38bdddedbb81164ca1c2426542d3001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030188  558359 cache.go:108] acquiring lock: {Name:mkdd0890718fe63cc9de68c451fd67ea6144efb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030228  558359 cache.go:108] acquiring lock: {Name:mk22116c0ad376cadb0916c6d017e3f4802f859e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030244  558359 cache.go:108] acquiring lock: {Name:mkc1307ce19322cc41cb38861b70d72163c7c394 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030257  558359 cache.go:108] acquiring lock: {Name:mk5b8c18f6c4b09710b17bc3abbf692f9764a7b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030241  558359 cache.go:108] acquiring lock: {Name:mk92598a97bfe9f214b7aafebe4c5728a886bc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030300  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0526 21:52:10.030305  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0526 21:52:10.030324  558359 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 215.985µs
	I0526 21:52:10.030327  558359 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0" took 197.498µs
	I0526 21:52:10.030339  558359 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0526 21:52:10.030348  558359 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0526 21:52:10.030344  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0526 21:52:10.030344  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0526 21:52:10.030372  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0526 21:52:10.030376  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0526 21:52:10.030374  558359 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0" took 240.796µs
	I0526 21:52:10.030374  558359 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 147.354µs
	I0526 21:52:10.030387  558359 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0526 21:52:10.030389  558359 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0" took 221.602µs
	I0526 21:52:10.030392  558359 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0526 21:52:10.030399  558359 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0526 21:52:10.030396  558359 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0" took 153.488µs
	I0526 21:52:10.030403  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0526 21:52:10.030413  558359 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0526 21:52:10.030425  558359 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 335.181µs
	I0526 21:52:10.030428  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0526 21:52:10.030437  558359 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0526 21:52:10.030431  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0526 21:52:10.030448  558359 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 249.48µs
	I0526 21:52:10.030459  558359 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0526 21:52:10.030471  558359 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 214.815µs
	I0526 21:52:10.030490  558359 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0526 21:52:10.030470  558359 cache.go:108] acquiring lock: {Name:mk0ecf94b4dd287cf13a54b3185a2b74dcd13557 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:10.030613  558359 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0526 21:52:10.030636  558359 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 206.18µs
	I0526 21:52:10.030654  558359 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0526 21:52:10.030672  558359 cache.go:88] Successfully saved all images to host disk.
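
The cache lines above repeat one pattern per image: acquire a per-image lock, check whether the tar file already exists under .minikube/cache/images, and only pull-and-save when it does not (every image was already cached here, so each check took a few hundred microseconds). A rough, concurrent sketch of that check; tarPathFor and cacheImage are hypothetical names, not minikube's cache.go API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// tarPathFor maps an image ref to its cache file, e.g.
// k8s.gcr.io/pause:3.1 -> <cacheDir>/k8s.gcr.io/pause_3.1
func tarPathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

// cacheImage is a no-op when the tar is already on disk, mirroring the
// "exists ... succeeded" lines above; the actual pull-and-save is omitted.
func cacheImage(cacheDir, image string) error {
	if _, err := os.Stat(tarPathFor(cacheDir, image)); err == nil {
		return nil
	}
	return fmt.Errorf("would pull and save %s (omitted in this sketch)", image)
}

func main() {
	images := []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/etcd:3.4.3-0", "k8s.gcr.io/coredns:1.6.5"}
	var wg sync.WaitGroup
	for _, img := range images {
		wg.Add(1)
		go func(img string) {
			defer wg.Done()
			if err := cacheImage(os.Getenv("HOME")+"/.minikube/cache/images", img); err != nil {
				fmt.Println(err)
			}
		}(img)
	}
	wg.Wait()
}
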
	I0526 21:52:10.530623  558359 cache.go:191] Successfully downloaded all kic artifacts
	I0526 21:52:10.530693  558359 start.go:313] acquiring machines lock for running-upgrade-20210526215018-510955: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:52:10.530785  558359 start.go:317] acquired machines lock for "running-upgrade-20210526215018-510955" in 70.48µs
	I0526 21:52:10.530826  558359 start.go:93] Skipping create...Using existing machine configuration
	I0526 21:52:10.530837  558359 fix.go:55] fixHost starting: minikube
	I0526 21:52:10.531155  558359 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:52:10.531192  558359 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:52:10.542124  558359 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0526 21:52:10.542571  558359 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:52:10.543143  558359 main.go:128] libmachine: Using API Version  1
	I0526 21:52:10.543168  558359 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:52:10.543522  558359 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:52:10.543711  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:10.543871  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetState
	I0526 21:52:10.547222  558359 fix.go:108] recreateIfNeeded on running-upgrade-20210526215018-510955: state=Running err=<nil>
	W0526 21:52:10.547243  558359 fix.go:134] unexpected machine state, will restart: <nil>
	I0526 21:52:10.550028  558359 out.go:170] * Updating the running kvm2 "running-upgrade-20210526215018-510955" VM ...
	I0526 21:52:10.550058  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:10.550231  558359 machine.go:88] provisioning docker machine ...
	I0526 21:52:10.550254  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:10.550404  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetMachineName
	I0526 21:52:10.550588  558359 buildroot.go:166] provisioning hostname "running-upgrade-20210526215018-510955"
	I0526 21:52:10.550610  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetMachineName
	I0526 21:52:10.550737  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:10.556062  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.556494  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:10.556516  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.556629  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHPort
	I0526 21:52:10.556781  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:10.556958  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:10.557086  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHUsername
	I0526 21:52:10.557243  558359 main.go:128] libmachine: Using SSH client type: native
	I0526 21:52:10.557405  558359 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0526 21:52:10.557419  558359 main.go:128] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20210526215018-510955 && echo "running-upgrade-20210526215018-510955" | sudo tee /etc/hostname
	I0526 21:52:10.694378  558359 main.go:128] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210526215018-510955
	
	I0526 21:52:10.694410  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:10.699546  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.699875  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:10.699913  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.700006  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHPort
	I0526 21:52:10.700189  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:10.700347  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:10.700491  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHUsername
	I0526 21:52:10.700630  558359 main.go:128] libmachine: Using SSH client type: native
	I0526 21:52:10.700769  558359 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0526 21:52:10.700788  558359 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20210526215018-510955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210526215018-510955/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20210526215018-510955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:52:10.829526  558359 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:52:10.829556  558359 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:52:10.829601  558359 buildroot.go:174] setting up certificates
	I0526 21:52:10.829612  558359 provision.go:83] configureAuth start
	I0526 21:52:10.829625  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetMachineName
	I0526 21:52:10.829882  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetIP
	I0526 21:52:10.834679  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.835001  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:10.835039  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.835141  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:10.839438  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.839711  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:10.839743  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.839853  558359 provision.go:137] copyHostCerts
	I0526 21:52:10.839905  558359 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:52:10.839914  558359 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:52:10.839954  558359 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:52:10.840046  558359 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:52:10.840057  558359 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:52:10.840074  558359 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:52:10.840121  558359 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:52:10.840128  558359 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:52:10.840142  558359 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:52:10.840180  558359 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20210526215018-510955 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube running-upgrade-20210526215018-510955]
	I0526 21:52:10.964354  558359 provision.go:171] copyRemoteCerts
	I0526 21:52:10.964400  558359 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:52:10.964417  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:10.968911  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.969613  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:10.969616  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHPort
	I0526 21:52:10.969635  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:10.969833  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:10.969987  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHUsername
	I0526 21:52:10.970111  558359 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/running-upgrade-20210526215018-510955/id_rsa Username:docker}
	I0526 21:52:11.060795  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0526 21:52:11.079402  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:52:11.097955  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0526 21:52:11.118202  558359 provision.go:86] duration metric: configureAuth took 288.57882ms
	I0526 21:52:11.118224  558359 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:52:11.118340  558359 machine.go:91] provisioned docker machine in 568.094128ms
	I0526 21:52:11.118352  558359 start.go:267] post-start starting for "running-upgrade-20210526215018-510955" (driver="kvm2")
	I0526 21:52:11.118358  558359 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:52:11.118378  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:11.118671  558359 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:52:11.118706  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:11.123943  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.124322  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:11.124354  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.124496  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHPort
	I0526 21:52:11.124715  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:11.124908  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHUsername
	I0526 21:52:11.125040  558359 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/running-upgrade-20210526215018-510955/id_rsa Username:docker}
	I0526 21:52:11.217295  558359 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:52:11.221703  558359 info.go:137] Remote host: Buildroot 2019.02.7
	I0526 21:52:11.221725  558359 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:52:11.221769  558359 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
	I0526 21:52:11.221874  558359 start.go:270] post-start completed in 103.512614ms
	I0526 21:52:11.221898  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:11.222158  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:11.227292  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.227681  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:11.227708  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.227843  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHPort
	I0526 21:52:11.228015  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:11.228199  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:11.228337  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHUsername
	I0526 21:52:11.228500  558359 main.go:128] libmachine: Using SSH client type: native
	I0526 21:52:11.228628  558359 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0526 21:52:11.228638  558359 main.go:128] libmachine: About to run SSH command:
	date +%s.%N
	I0526 21:52:11.353626  558359 main.go:128] libmachine: SSH cmd err, output: <nil>: 1622065931.353874250
	
	I0526 21:52:11.353649  558359 fix.go:212] guest clock: 1622065931.353874250
	I0526 21:52:11.353660  558359 fix.go:225] Guest: 2021-05-26 21:52:11.35387425 +0000 UTC Remote: 2021-05-26 21:52:11.222139461 +0000 UTC m=+1.471382038 (delta=131.734789ms)
	I0526 21:52:11.353685  558359 fix.go:196] guest clock delta is within tolerance: 131.734789ms
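
The fix.go lines above read the guest's clock with `date +%s.%N`, compare it to the host's, and accept the ~132ms skew as within tolerance instead of forcing a resync. A small sketch of that comparison; the 10-second threshold is an assumed value for illustration, not minikube's constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses `date +%s.%N` output (seconds.nanoseconds). Parsing via
// float64 loses a little sub-microsecond precision, which is fine for a skew check.
func guestTime(output string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(output), 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, int64(secs*float64(time.Second))), nil
}

func main() {
	const tolerance = 10 * time.Second // assumed threshold, for illustration only
	guest, err := guestTime("1622065931.353874250") // guest output from the log above
	if err != nil {
		panic(err)
	}
	host := time.Date(2021, time.May, 26, 21, 52, 11, 222139461, time.UTC) // "Remote" time from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
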
	I0526 21:52:11.353692  558359 fix.go:57] fixHost completed within 822.854626ms
	I0526 21:52:11.353698  558359 start.go:80] releasing machines lock for "running-upgrade-20210526215018-510955", held for 822.9014ms
	I0526 21:52:11.353736  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:11.353975  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetIP
	I0526 21:52:11.358950  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.359261  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:11.359298  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.359421  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:11.359580  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:11.360003  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .DriverName
	I0526 21:52:11.360252  558359 ssh_runner.go:149] Run: systemctl --version
	I0526 21:52:11.360276  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:11.360290  558359 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:52:11.360330  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHHostname
	I0526 21:52:11.365046  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.365333  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:11.365367  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.365467  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHPort
	I0526 21:52:11.365646  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:11.365809  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHUsername
	I0526 21:52:11.365968  558359 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/running-upgrade-20210526215018-510955/id_rsa Username:docker}
	I0526 21:52:11.366073  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.366480  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:11.366505  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:11.366685  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHPort
	I0526 21:52:11.366850  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHKeyPath
	I0526 21:52:11.366976  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetSSHUsername
	I0526 21:52:11.367090  558359 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/running-upgrade-20210526215018-510955/id_rsa Username:docker}
	I0526 21:52:11.455433  558359 preload.go:98] Checking if preload exists for k8s version v1.17.0 and runtime containerd
	W0526 21:52:11.486654  558359 preload.go:119] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I0526 21:52:11.486712  558359 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:52:11.497042  558359 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:52:11.508614  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:52:11.524993  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjEiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM
4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lLm9wdGlvbnNdCiAgICAgICAgICBOb1Bpdm90Um9vdCA9IHRydWUKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDI
KICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0526 21:52:11.552901  558359 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:52:11.559603  558359 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0526 21:52:11.565685  558359 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:52:11.676231  558359 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0526 21:52:11.697836  558359 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:52:11.697892  558359 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:52:11.707757  558359 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
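
After restarting containerd, minikube waits up to 60s for /run/containerd/containerd.sock: the first stat fails as shown, it retries roughly a second later, and the second stat succeeds. A minimal sketch of such a poll loop, assuming a fixed one-second interval rather than minikube's actual retry.go backoff:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is up; `crictl version` can run next
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
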
	I0526 21:52:12.813055  558359 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:52:12.818639  558359 start.go:401] Will wait 60s for crictl version
	I0526 21:52:12.818697  558359 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:52:12.838624  558359 start.go:410] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.2.10
	RuntimeApiVersion:  v1alpha2
	I0526 21:52:12.838696  558359 ssh_runner.go:149] Run: containerd --version
	I0526 21:52:12.878213  558359 out.go:170] * Preparing Kubernetes v1.17.0 on containerd 1.2.10 ...
	I0526 21:52:12.878245  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) Calling .GetIP
	I0526 21:52:12.883280  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:12.883601  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:85:87", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:50:51 +0000 UTC Type:0 Mac:52:54:00:b5:85:87 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-20210526215018-510955 Clientid:01:52:54:00:b5:85:87}
	I0526 21:52:12.883632  558359 main.go:128] libmachine: (running-upgrade-20210526215018-510955) DBG | domain running-upgrade-20210526215018-510955 has defined IP address 192.168.50.63 and MAC address 52:54:00:b5:85:87 in network minikube-net
	I0526 21:52:12.883843  558359 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0526 21:52:12.888287  558359 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:52:12.898929  558359 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/running-upgrade-20210526215018-510955/client.crt
	I0526 21:52:12.899056  558359 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/running-upgrade-20210526215018-510955/client.key
	I0526 21:52:12.899188  558359 preload.go:98] Checking if preload exists for k8s version v1.17.0 and runtime containerd
	W0526 21:52:12.930134  558359 preload.go:119] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I0526 21:52:12.930183  558359 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:52:12.946902  558359 containerd.go:566] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0526 21:52:12.946920  558359 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
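
Each image in the list above is then handled the same way in the lines that follow: check whether containerd's k8s.io namespace already holds it at the expected digest, and if not, remove the stale tag with crictl and import the cached tar with ctr. The real code issues these commands over SSH through ssh_runner; the sketch below simply shells out locally, and loadFromCache is a hypothetical helper, not minikube's cache_images.go API:

package main

import (
	"fmt"
	"os/exec"
)

// imageAtDigest mirrors the `ctr -n=k8s.io images check | grep ...` probe above.
func imageAtDigest(image, digest string) bool {
	cmd := fmt.Sprintf("sudo ctr -n=k8s.io images check | grep %s | grep %s", image, digest)
	return exec.Command("/bin/bash", "-c", cmd).Run() == nil
}

// loadFromCache removes a stale tag (best effort) and imports the cached tar.
func loadFromCache(image, digest, tarPath string) error {
	if imageAtDigest(image, digest) {
		return nil // already present at the right digest
	}
	_ = exec.Command("sudo", "/bin/crictl", "rmi", image).Run()
	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarPath).Run()
}

func main() {
	err := loadFromCache("k8s.gcr.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
		"/var/lib/minikube/images/pause_3.1")
	fmt.Printf("load pause:3.1 from cache: %v\n", err)
}
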
	I0526 21:52:12.946977  558359 image.go:162] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0526 21:52:12.946995  558359 image.go:162] retrieving image: k8s.gcr.io/pause:3.1
	I0526 21:52:12.947029  558359 image.go:162] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0526 21:52:12.947056  558359 image.go:162] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0526 21:52:12.947146  558359 image.go:162] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0526 21:52:12.947173  558359 image.go:162] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0526 21:52:12.947203  558359 image.go:162] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0526 21:52:12.947210  558359 image.go:162] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:52:12.947151  558359 image.go:162] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0526 21:52:12.947296  558359 image.go:162] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0526 21:52:12.974093  558359 image.go:200] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc000146260}
	I0526 21:52:12.974162  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.1 | grep da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0526 21:52:13.565090  558359 cache_images.go:106] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0526 21:52:13.565140  558359 cri.go:205] Removing image: k8s.gcr.io/pause:3.1
	I0526 21:52:13.565187  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:13.581107  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/pause:3.1
	I0526 21:52:13.681951  558359 image.go:200] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc00057e880}
	I0526 21:52:13.682030  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0526 21:52:13.710207  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I0526 21:52:13.710317  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0526 21:52:13.897865  558359 image.go:200] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc0001463a0}
	I0526 21:52:13.897941  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4"
	I0526 21:52:13.993819  558359 image.go:200] found k8s.gcr.io/coredns:1.6.5 locally: &{Image:0xc00057e700}
	I0526 21:52:13.993905  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.6.5 | grep 70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61"
	I0526 21:52:14.501399  558359 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0526 21:52:14.501432  558359 containerd.go:260] Loading image: /var/lib/minikube/images/pause_3.1
	I0526 21:52:14.501488  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.1
	I0526 21:52:14.501596  558359 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0526 21:52:14.501633  558359 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:52:14.501663  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:14.922611  558359 image.go:200] found k8s.gcr.io/kube-scheduler:v1.17.0 locally: &{Image:0xc00022a5c0}
	I0526 21:52:14.922688  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.17.0 | grep 78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28"
	I0526 21:52:15.123177  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4": (1.225204873s)
	I0526 21:52:15.123231  558359 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0526 21:52:15.123267  558359 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0526 21:52:15.123327  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:15.508694  558359 image.go:200] found k8s.gcr.io/kube-proxy:v1.17.0 locally: &{Image:0xc00022aa80}
	I0526 21:52:15.508794  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.17.0 | grep 7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19"
	I0526 21:52:15.933897  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.6.5 | grep 70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61": (1.939955942s)
	I0526 21:52:15.933962  558359 cache_images.go:106] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0526 21:52:15.933998  558359 cri.go:205] Removing image: k8s.gcr.io/coredns:1.6.5
	I0526 21:52:15.934042  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:15.936433  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.1: (1.43490265s)
	I0526 21:52:15.936462  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache
	I0526 21:52:15.936473  558359 ssh_runner.go:189] Completed: which crictl: (1.434793928s)
	I0526 21:52:15.936548  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:52:15.948231  558359 image.go:200] found k8s.gcr.io/kube-apiserver:v1.17.0 locally: &{Image:0xc00022ab80}
	I0526 21:52:15.948292  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.17.0 | grep 0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2"
	I0526 21:52:16.038264  558359 image.go:200] found k8s.gcr.io/kube-controller-manager:v1.17.0 locally: &{Image:0xc00057ec60}
	I0526 21:52:16.038361  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.17.0 | grep 5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056"
	I0526 21:52:16.742354  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.17.0 | grep 78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28": (1.819628979s)
	I0526 21:52:16.742419  558359 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0526 21:52:16.742429  558359 ssh_runner.go:189] Completed: which crictl: (1.619086018s)
	I0526 21:52:16.742462  558359 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0526 21:52:16.742496  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0526 21:52:16.742496  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:16.826811  558359 image.go:200] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc00022ae20}
	I0526 21:52:16.826889  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db"
	I0526 21:52:17.085267  558359 image.go:200] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc000688d60}
	I0526 21:52:17.085346  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.3-0 | grep 303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"
	I0526 21:52:17.272777  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.17.0 | grep 7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19": (1.763942113s)
	I0526 21:52:17.272827  558359 ssh_runner.go:189] Completed: which crictl: (1.338769871s)
	I0526 21:52:17.272837  558359 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0526 21:52:17.272883  558359 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0526 21:52:17.272898  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/coredns:1.6.5
	I0526 21:52:17.272913  558359 ssh_runner.go:189] Completed: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.336346628s)
	I0526 21:52:17.272928  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:17.272958  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0526 21:52:17.273038  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0526 21:52:17.747178  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.17.0 | grep 5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056": (1.708791232s)
	I0526 21:52:17.747233  558359 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0526 21:52:17.747270  558359 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0526 21:52:17.747333  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:17.747412  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.17.0 | grep 0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2": (1.799107965s)
	I0526 21:52:17.747441  558359 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0526 21:52:17.747464  558359 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0526 21:52:17.747491  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:17.747581  558359 ssh_runner.go:189] Completed: sudo /bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4: (1.005066892s)
	I0526 21:52:17.747619  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0526 21:52:17.747688  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0526 21:52:17.747767  558359 ssh_runner.go:189] Completed: which crictl: (1.005203398s)
	I0526 21:52:17.747796  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0526 21:52:18.062321  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db": (1.235404635s)
	I0526 21:52:18.062397  558359 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0526 21:52:18.062441  558359 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0526 21:52:18.062493  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:18.155399  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.3-0 | grep 303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f": (1.070035295s)
	I0526 21:52:18.155454  558359 cache_images.go:106] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0526 21:52:18.155491  558359 cri.go:205] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0526 21:52:18.155496  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5
	I0526 21:52:18.155539  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:18.155597  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0526 21:52:18.155618  558359 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0526 21:52:18.155649  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes)
	I0526 21:52:18.155680  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0526 21:52:18.155716  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0526 21:52:18.155759  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0526 21:52:18.155786  558359 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0526 21:52:18.155807  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (17437696 bytes)
	I0526 21:52:18.155834  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0
	I0526 21:52:18.155892  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0526 21:52:18.155897  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0526 21:52:18.274407  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0526 21:52:18.274420  558359 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.17.0 (exists)
	I0526 21:52:18.274528  558359 containerd.go:260] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0526 21:52:18.274536  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0526 21:52:18.274584  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0526 21:52:18.388344  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0
	I0526 21:52:18.388458  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0526 21:52:18.388531  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0
	I0526 21:52:18.388590  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0526 21:52:18.388671  558359 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/coredns_1.6.5 (exists)
	I0526 21:52:18.388695  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0526 21:52:18.388741  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0526 21:52:18.388805  558359 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/etcd:3.4.3-0
	I0526 21:52:19.384795  558359 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: (1.110231208s)
	I0526 21:52:19.384847  558359 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0526 21:52:19.384894  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (78078976 bytes)
	I0526 21:52:19.384916  558359 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
	I0526 21:52:19.384996  558359 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.17.0 (exists)
	I0526 21:52:19.385018  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0526 21:52:19.385025  558359 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-proxy_v1.17.0 (exists)
	I0526 21:52:19.385051  558359 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.17.0 (exists)
	I0526 21:52:19.385055  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.17.0: (1.110447938s)
	I0526 21:52:19.385077  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0526 21:52:19.385113  558359 containerd.go:260] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0526 21:52:19.385163  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.6.5
	I0526 21:52:19.401592  558359 ssh_runner.go:310] copy: skipping /var/lib/minikube/images/etcd_3.4.3-0 (exists)
	I0526 21:52:29.524668  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.6.5: (10.139473392s)
	I0526 21:52:29.524711  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 from cache
	I0526 21:52:29.524734  558359 containerd.go:260] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0526 21:52:29.524784  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0526 21:52:30.374831  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0526 21:52:30.374866  558359 containerd.go:260] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0526 21:52:30.374916  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0526 21:52:31.705407  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4: (1.330464053s)
	I0526 21:52:31.705449  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0526 21:52:31.705479  558359 containerd.go:260] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0526 21:52:31.705541  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0526 21:52:32.905113  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.17.0: (1.199548491s)
	I0526 21:52:32.905139  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0526 21:52:32.905165  558359 containerd.go:260] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0526 21:52:32.905210  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.17.0
	I0526 21:52:35.747647  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.17.0: (2.842411086s)
	I0526 21:52:35.747678  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0526 21:52:35.747709  558359 containerd.go:260] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0526 21:52:35.747753  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0526 21:52:36.841586  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.17.0: (1.093793129s)
	I0526 21:52:36.841620  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0526 21:52:36.841646  558359 containerd.go:260] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0526 21:52:36.841707  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.3-0
	I0526 21:52:39.375327  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.3-0: (2.533584609s)
	I0526 21:52:39.375361  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0526 21:52:39.375387  558359 containerd.go:260] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0526 21:52:39.375426  558359 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0
	I0526 21:52:44.623324  558359 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0: (5.247866096s)
	I0526 21:52:44.623372  558359 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0526 21:52:44.623407  558359 cache_images.go:113] Successfully loaded all cached images
	I0526 21:52:44.623424  558359 cache_images.go:82] LoadImages completed in 31.676492485s
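	(Annotation) The image-load phase above follows one pattern per cached image: check containerd for the tag and digest, and when the check fails, remove any stale tag and import the tarball staged under /var/lib/minikube/images. A minimal shell sketch of that check-then-import step, assuming a shell on the node (e.g. via `minikube ssh`) and using the coredns ref, digest and tarball path from the log as the example:

	  # Check whether the image is already present in containerd's k8s.io namespace.
	  ref="k8s.gcr.io/coredns:1.6.5"
	  digest="70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61"
	  if sudo ctr -n=k8s.io images check | grep "$ref" | grep -q "$digest"; then
	    echo "$ref already present"
	  else
	    # Same recovery path the log takes: drop the stale tag, then import the cached tarball.
	    sudo /bin/crictl rmi "$ref" || true
	    sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.6.5
	  fi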
	I0526 21:52:44.623502  558359 ssh_runner.go:149] Run: sudo crictl info
	I0526 21:52:44.641078  558359 cni.go:93] Creating CNI manager for ""
	I0526 21:52:44.641104  558359 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0526 21:52:44.641115  558359 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0526 21:52:44.641129  558359 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20210526215018-510955 NodeName:running-upgrade-20210526215018-510955 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0526 21:52:44.641292  558359 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "running-upgrade-20210526215018-510955"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
	I0526 21:52:44.641404  558359 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-20210526215018-510955 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:}
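	(Annotation) As a quick sanity check on the kubeadm config rendered above, the pod CIDR handed to kubeadm (networking.podSubnet) has to agree with the clusterCIDR given to kube-proxy and with the 10.244.0.0/16 value logged for "Using pod CIDR". A hedged one-liner, assuming the config has been written to /var/tmp/minikube/kubeadm.yaml.new as shown further down in the log:

	  # Both values should print 10.244.0.0/16 for the config above.
	  sudo grep -E 'podSubnet|clusterCIDR' /var/tmp/minikube/kubeadm.yaml.new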
	I0526 21:52:44.641468  558359 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0526 21:52:44.651825  558359 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubectl
	Initiating transfer...
	I0526 21:52:44.651875  558359 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0526 21:52:44.660743  558359 binary.go:65] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256
	I0526 21:52:44.660810  558359 binary.go:65] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256
	I0526 21:52:44.660803  558359 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:52:44.660743  558359 binary.go:65] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256
	I0526 21:52:44.660908  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0526 21:52:44.660995  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0526 21:52:44.677733  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0526 21:52:44.677795  558359 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0526 21:52:44.677826  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0526 21:52:44.677868  558359 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0526 21:52:44.714186  558359 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0526 21:52:44.808335  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
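	(Annotation) The three "Not caching binary" lines above fetch kubelet, kubeadm and kubectl with a `?checksum=file:` query, i.e. each binary is validated against the .sha256 object published next to it. A sketch of doing the same verification by hand; the version, platform and binary name are taken from the log, and the .sha256 object is assumed to contain just the hex digest:

	  ver=v1.17.0
	  bin=kubelet
	  base="https://storage.googleapis.com/kubernetes-release/release/${ver}/bin/linux/amd64"
	  curl -sSLO "${base}/${bin}"
	  # Build a "digest  filename" line so sha256sum -c can verify the download.
	  echo "$(curl -sSL "${base}/${bin}.sha256")  ${bin}" | sha256sum -c -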
	I0526 21:52:45.834355  558359 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0526 21:52:45.842483  558359 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (552 bytes)
	I0526 21:52:45.854544  558359 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0526 21:52:45.866371  558359 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1906 bytes)
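	(Annotation) Once the unit file and the 10-kubeadm.conf drop-in above are in place, the effective kubelet configuration (base unit plus the ExecStart override shown earlier) can be reviewed in one place; a small sketch, assuming systemd on the node:

	  # Show the merged kubelet unit: /lib/systemd/system/kubelet.service plus its drop-ins.
	  sudo systemctl cat kubelet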
	I0526 21:52:45.878896  558359 ssh_runner.go:149] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0526 21:52:45.884224  558359 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:52:45.896427  558359 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles for IP: 192.168.50.63
	I0526 21:52:45.896487  558359 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key
	I0526 21:52:45.896509  558359 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key
	I0526 21:52:45.896561  558359 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/client.crt
	I0526 21:52:45.896679  558359 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/client.key
	I0526 21:52:45.896828  558359 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/client.key
	I0526 21:52:45.896854  558359 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.1519f60b
	I0526 21:52:45.896884  558359 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.1519f60b with IP's: [192.168.50.63 10.96.0.1 127.0.0.1 10.0.0.1]
	I0526 21:52:46.109744  558359 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.1519f60b ...
	I0526 21:52:46.109775  558359 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.1519f60b: {Name:mk0623e8e9461ffc91769637b02b34fb8a4c0bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:52:46.109963  558359 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.1519f60b ...
	I0526 21:52:46.109975  558359 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.1519f60b: {Name:mkcb7b9c3852f93af34986e32fe923204db8ec69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:52:46.110052  558359 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.1519f60b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt
	I0526 21:52:46.110119  558359 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.1519f60b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key
	I0526 21:52:46.110169  558359 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.key
	I0526 21:52:46.110178  558359 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.crt with IP's: []
	I0526 21:52:46.308332  558359 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.crt ...
	I0526 21:52:46.308368  558359 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.crt: {Name:mk96c2f3dfd85f9ca997187ad57045b17c447901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:52:46.308571  558359 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.key ...
	I0526 21:52:46.308586  558359 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.key: {Name:mk7ba8ff7cc2810acd6248a79891d28f61a5098a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:52:46.308774  558359 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem (1338 bytes)
	W0526 21:52:46.308820  558359 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955_empty.pem, impossibly tiny 0 bytes
	I0526 21:52:46.308841  558359 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem (1675 bytes)
	I0526 21:52:46.308891  558359 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem (1078 bytes)
	I0526 21:52:46.308921  558359 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem (1123 bytes)
	I0526 21:52:46.308993  558359 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem (1679 bytes)
	I0526 21:52:46.309924  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0526 21:52:46.328311  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0526 21:52:46.344534  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0526 21:52:46.360936  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0526 21:52:46.379283  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0526 21:52:46.395999  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0526 21:52:46.412948  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0526 21:52:46.430801  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0526 21:52:46.447654  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem --> /usr/share/ca-certificates/510955.pem (1338 bytes)
	I0526 21:52:46.464389  558359 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
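	(Annotation) The apiserver certificate generated above is created with the IP SANs [192.168.50.63 10.96.0.1 127.0.0.1 10.0.0.1] and then copied to /var/lib/minikube/certs/apiserver.crt. A hedged way to confirm those SANs on the node once the copy has happened (additional DNS SANs may also be present):

	  # The Subject Alternative Name block should include the node IP, service VIP and loopback above.
	  sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'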
	I0526 21:52:46.483388  558359 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (774 bytes)
	I0526 21:52:46.497464  558359 ssh_runner.go:149] Run: openssl version
	I0526 21:52:46.504994  558359 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510955.pem && ln -fs /usr/share/ca-certificates/510955.pem /etc/ssl/certs/510955.pem"
	I0526 21:52:46.515345  558359 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/510955.pem
	I0526 21:52:46.521956  558359 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 May 26 21:12 /usr/share/ca-certificates/510955.pem
	I0526 21:52:46.522005  558359 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/510955.pem
	I0526 21:52:46.535022  558359 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/510955.pem /etc/ssl/certs/51391683.0"
	I0526 21:52:46.542413  558359 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0526 21:52:46.552187  558359 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:52:46.557367  558359 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 May 26 20:40 /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:52:46.557413  558359 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0526 21:52:46.569712  558359 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
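	(Annotation) The two `ln -fs` steps above install the test cert and the minikube CA into the system trust store under their OpenSSL subject hashes (51391683.0 and b5213941.0). A check that the hash and the symlink agree, reusing the same openssl invocation the log runs and the minikube CA as the example:

	  # The hash printed by openssl should match the /etc/ssl/certs/<hash>.0 link created above.
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  ls -l /etc/ssl/certs/b5213941.0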
	I0526 21:52:46.576572  558359 kubeadm.go:390] StartCluster: {Name:running-upgrade-20210526215018-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.63 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:52:46.576644  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0526 21:52:46.576680  558359 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 21:52:46.612310  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:52:46.612353  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:52:46.612360  558359 cri.go:76] found id: "bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739"
	I0526 21:52:46.612366  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:52:46.612370  558359 cri.go:76] found id: "90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554"
	I0526 21:52:46.612377  558359 cri.go:76] found id: ""
	I0526 21:52:46.612440  558359 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0526 21:52:46.643928  558359 cri.go:103] JSON = [{"ociVersion":"1.0.1-dev","id":"0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec","pid":3204,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec/rootfs","created":"2021-05-26T21:51:49.857411485Z","annotations":{"io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fb02a42fb9c105fbbf33147952bc863e034f40a4fb263d72fa94496875470e89"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67","pid":3232,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/2aa93735f456937dd6bc96ddfe43
20c6be1c44f039abe2a485527ad8b14ace67/rootfs","created":"2021-05-26T21:51:49.882399412Z","annotations":{"io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e8802b86763a34bfd26f4fafd2526962f6199d84856241229db70a58f3dc8a94"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554","pid":3091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554/rootfs","created":"2021-05-26T21:51:49.633379301Z","annotations":{"io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bcb885b0196a3fe552946bf24a6afc3b6640786ac6d80c5918f94a0b3bfd560e"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"9bdaad23a26d0d6ce54b43a7aec95a56b240f5a6da48b653f5eecc81ec238164","pid":2913,"status":"running","bundle":"/run/c
ontainerd/io.containerd.runtime.v1.linux/k8s.io/9bdaad23a26d0d6ce54b43a7aec95a56b240f5a6da48b653f5eecc81ec238164","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/9bdaad23a26d0d6ce54b43a7aec95a56b240f5a6da48b653f5eecc81ec238164/rootfs","created":"2021-05-26T21:51:49.200907031Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9bdaad23a26d0d6ce54b43a7aec95a56b240f5a6da48b653f5eecc81ec238164","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-minikube_52d3bd95fc93c917a5a0edc6e16385fc"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f","pid":3221,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f/rootfs","created":"2021-05-26T21:51:49.87
7689693Z","annotations":{"io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bdf8b923a7fd880e8edf75961618e96fd656a1283b98db05f36ddb5d7210f8fb"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739","pid":3192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739/rootfs","created":"2021-05-26T21:51:49.795315573Z","annotations":{"io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9bdaad23a26d0d6ce54b43a7aec95a56b240f5a6da48b653f5eecc81ec238164"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"bcb885b0196a3fe552946bf24a6afc3b6640786ac6d80c5918f94a0b3bfd560e","pid":2866,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/bcb885b0196a3fe552946bf24a6afc
3b6640786ac6d80c5918f94a0b3bfd560e","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/bcb885b0196a3fe552946bf24a6afc3b6640786ac6d80c5918f94a0b3bfd560e/rootfs","created":"2021-05-26T21:51:49.09310902Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bcb885b0196a3fe552946bf24a6afc3b6640786ac6d80c5918f94a0b3bfd560e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-addon-manager-minikube_c3e29047da86ce6690916750ab69c40b"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"bdf8b923a7fd880e8edf75961618e96fd656a1283b98db05f36ddb5d7210f8fb","pid":2949,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/bdf8b923a7fd880e8edf75961618e96fd656a1283b98db05f36ddb5d7210f8fb","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/bdf8b923a7fd880e8edf75961618e96fd656a1283b98db05f36ddb5d7210f8fb/rootfs","created":"2021-05-26T21:51:49.284407724Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.k
ubernetes.cri.sandbox-id":"bdf8b923a7fd880e8edf75961618e96fd656a1283b98db05f36ddb5d7210f8fb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-minikube_e7ce3a6ee9fa0ec547ac7b4b17af0dcb"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"e8802b86763a34bfd26f4fafd2526962f6199d84856241229db70a58f3dc8a94","pid":2925,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/e8802b86763a34bfd26f4fafd2526962f6199d84856241229db70a58f3dc8a94","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/e8802b86763a34bfd26f4fafd2526962f6199d84856241229db70a58f3dc8a94/rootfs","created":"2021-05-26T21:51:49.209172509Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e8802b86763a34bfd26f4fafd2526962f6199d84856241229db70a58f3dc8a94","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-minikube_ff67867321338ffd885039e188f6b424"},"owner":"root"},{"ociVersion":"1.0.1-dev","id":"fb02a4
2fb9c105fbbf33147952bc863e034f40a4fb263d72fa94496875470e89","pid":2889,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/fb02a42fb9c105fbbf33147952bc863e034f40a4fb263d72fa94496875470e89","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/fb02a42fb9c105fbbf33147952bc863e034f40a4fb263d72fa94496875470e89/rootfs","created":"2021-05-26T21:51:49.138520062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fb02a42fb9c105fbbf33147952bc863e034f40a4fb263d72fa94496875470e89","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-minikube_30361c75aec6e4a0368a8d1527c20ae1"},"owner":"root"}]
	I0526 21:52:46.644116  558359 cri.go:113] list returned 10 containers
	I0526 21:52:46.644136  558359 cri.go:116] container: {ID:0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec Status:running}
	I0526 21:52:46.644152  558359 cri.go:122] skipping {0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec running}: state = "running", want "paused"
	I0526 21:52:46.644169  558359 cri.go:116] container: {ID:2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67 Status:running}
	I0526 21:52:46.644176  558359 cri.go:122] skipping {2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67 running}: state = "running", want "paused"
	I0526 21:52:46.644185  558359 cri.go:116] container: {ID:90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554 Status:running}
	I0526 21:52:46.644192  558359 cri.go:122] skipping {90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554 running}: state = "running", want "paused"
	I0526 21:52:46.644203  558359 cri.go:116] container: {ID:9bdaad23a26d0d6ce54b43a7aec95a56b240f5a6da48b653f5eecc81ec238164 Status:running}
	I0526 21:52:46.644210  558359 cri.go:118] skipping 9bdaad23a26d0d6ce54b43a7aec95a56b240f5a6da48b653f5eecc81ec238164 - not in ps
	I0526 21:52:46.644217  558359 cri.go:116] container: {ID:a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f Status:running}
	I0526 21:52:46.644224  558359 cri.go:122] skipping {a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f running}: state = "running", want "paused"
	I0526 21:52:46.644232  558359 cri.go:116] container: {ID:bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739 Status:running}
	I0526 21:52:46.644239  558359 cri.go:122] skipping {bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739 running}: state = "running", want "paused"
	I0526 21:52:46.644248  558359 cri.go:116] container: {ID:bcb885b0196a3fe552946bf24a6afc3b6640786ac6d80c5918f94a0b3bfd560e Status:running}
	I0526 21:52:46.644255  558359 cri.go:118] skipping bcb885b0196a3fe552946bf24a6afc3b6640786ac6d80c5918f94a0b3bfd560e - not in ps
	I0526 21:52:46.644263  558359 cri.go:116] container: {ID:bdf8b923a7fd880e8edf75961618e96fd656a1283b98db05f36ddb5d7210f8fb Status:running}
	I0526 21:52:46.644270  558359 cri.go:118] skipping bdf8b923a7fd880e8edf75961618e96fd656a1283b98db05f36ddb5d7210f8fb - not in ps
	I0526 21:52:46.644278  558359 cri.go:116] container: {ID:e8802b86763a34bfd26f4fafd2526962f6199d84856241229db70a58f3dc8a94 Status:running}
	I0526 21:52:46.644284  558359 cri.go:118] skipping e8802b86763a34bfd26f4fafd2526962f6199d84856241229db70a58f3dc8a94 - not in ps
	I0526 21:52:46.644291  558359 cri.go:116] container: {ID:fb02a42fb9c105fbbf33147952bc863e034f40a4fb263d72fa94496875470e89 Status:running}
	I0526 21:52:46.644297  558359 cri.go:118] skipping fb02a42fb9c105fbbf33147952bc863e034f40a4fb263d72fa94496875470e89 - not in ps
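	(Annotation) The listing above combines two views: crictl, filtered to kube-system containers by label, and runc's raw state under /run/containerd/runc/k8s.io, which also includes the pause/sandbox containers — hence the "not in ps" skips. Both commands, as the log runs them, for re-running by hand on the node:

	  # CRI-visible kube-system containers (IDs only).
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	  # Raw runc state for the same root, including sandboxes that crictl's filter omits.
	  sudo runc --root /run/containerd/runc/k8s.io list -f json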
	I0526 21:52:46.644346  558359 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0526 21:52:46.652275  558359 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0526 21:52:46.652293  558359 kubeadm.go:600] restartCluster start
	I0526 21:52:46.652349  558359 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0526 21:52:46.659880  558359 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0526 21:52:46.660474  558359 kubeconfig.go:117] verify returned: extract IP: "running-upgrade-20210526215018-510955" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:52:46.660580  558359 kubeconfig.go:128] "running-upgrade-20210526215018-510955" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig - will repair!
	I0526 21:52:46.660941  558359 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig: {Name:mk1cc7fc8b8e5fab9f3b22f1113879e2241e6726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:52:46.661756  558359 kapi.go:59] client config for running-upgrade-20210526215018-510955: &rest.Config{Host:"https://192.168.50.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/running-upgrade-20210526215018-510955/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/
running-upgrade-20210526215018-510955/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ac600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0526 21:52:46.663279  558359 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0526 21:52:46.670328  558359 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.50.63
	@@ -12,32 +12,59 @@
	       - authentication
	 nodeRegistration:
	   criSocket: /run/containerd/containerd.sock
	-  name: minikube
	+  name: "running-upgrade-20210526215018-510955"
	+  kubeletExtraArgs:
	+    node-ip: 192.168.50.63
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	+  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+controllerManager:
	+  extraArgs:
	+    allocate-node-cidrs: "true"
	+    leader-elect: "false"
	+scheduler:
	+  extraArgs:
	+    leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes
	-controlPlaneEndpoint: localhost:8443
	+clusterName: mk
	+controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	+    extraArgs:
	+      proxy-refresh-interval: "70000"
	 kubernetesVersion: v1.17.0
	 networking:
	   dnsDomain: cluster.local
	-  podSubnet: ""
	+  podSubnet: "10.244.0.0/16"
	   serviceSubnet: 10.96.0.0/12
	 ---
	 apiVersion: kubelet.config.k8s.io/v1beta1
	 kind: KubeletConfiguration
	+authentication:
	+  x509:
	+    clientCAFile: /var/lib/minikube/certs/ca.crt
	+cgroupDriver: cgroupfs
	+clusterDomain: "cluster.local"
	+# disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	 evictionHard:
	   nodefs.available: "0%"
	   nodefs.inodesFree: "0%"
	   imagefs.available: "0%"
	+failSwapOn: false
	+staticPodPath: /etc/kubernetes/manifests
	+---
	+apiVersion: kubeproxy.config.k8s.io/v1alpha1
	+kind: KubeProxyConfiguration
	+clusterCIDR: "10.244.0.0/16"
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	
	-- /stdout --
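	(Annotation) The "needs reconfigure" decision comes straight from this diff: the old v1beta1 config on disk no longer matches the freshly rendered v1beta2 one, so the cluster is marked for restart. The comparison can be reproduced on the node with the same command the log runs:

	  # A non-empty diff (non-zero exit) is what triggers the reconfigure path.
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new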
	I0526 21:52:46.670349  558359 kubeadm.go:1032] stopping kube-system containers ...
	I0526 21:52:46.670362  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0526 21:52:46.670405  558359 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 21:52:46.688789  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:52:46.688812  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:52:46.688818  558359 cri.go:76] found id: "bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739"
	I0526 21:52:46.688838  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:52:46.688845  558359 cri.go:76] found id: "90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554"
	I0526 21:52:46.688854  558359 cri.go:76] found id: ""
	I0526 21:52:46.688874  558359 cri.go:221] Stopping containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67 bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec 90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554]
	I0526 21:52:46.688920  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:52:46.695020  558359 ssh_runner.go:149] Run: sudo /bin/crictl stop a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67 bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec 90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554
	I0526 21:52:57.548964  558359 ssh_runner.go:189] Completed: sudo /bin/crictl stop a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67 bb242ecc00001cff1824580784deb01c6723565ab3026f6247e52b0fd370d739 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec 90c11817db7987a6727ef194579fbed7e4554ee86e9e61b738b0b174c91a4554: (10.853893269s)
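	[Editor's note] The "stopping kube-system containers" step above lists container IDs by namespace label and then stops them with a single `crictl stop`. A self-contained sketch of that sequence, again with local exec standing in for the ssh_runner, is shown here.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all containers labelled with the kube-system namespace,
	// matching the `crictl ps -a --quiet --label ...` call in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("listing containers failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	// Stop every ID in one crictl invocation, as the log does.
	args := append([]string{"crictl", "stop"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		fmt.Println("crictl stop failed:", err)
		return
	}
	fmt.Printf("stopped %d kube-system containers\n", len(ids))
}
```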
	I0526 21:52:57.549042  558359 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0526 21:52:57.562029  558359 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0526 21:52:57.569902  558359 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5621 May 26 21:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5661 May 26 21:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1981 May 26 21:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5609 May 26 21:51 /etc/kubernetes/scheduler.conf
	
	I0526 21:52:57.569953  558359 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0526 21:52:57.576060  558359 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0526 21:52:57.576110  558359 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0526 21:52:57.583187  558359 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0526 21:52:57.588904  558359 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0526 21:52:57.588945  558359 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0526 21:52:57.597330  558359 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0526 21:52:57.603610  558359 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0526 21:52:57.603654  558359 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0526 21:52:57.610342  558359 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0526 21:52:57.616638  558359 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0526 21:52:57.616683  558359 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
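	[Editor's note] The four grep/rm pairs above implement one idea: any existing kubeconfig that does not reference the expected control-plane endpoint is treated as stale and removed so kubeadm can regenerate it. A compact sketch of that loop, under the same local-exec assumption, follows.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent from the file;
		// the log treats that as "may not be in <file> - will remove".
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				fmt.Println("remove failed:", rmErr)
			}
		}
	}
}
```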
	I0526 21:52:57.624503  558359 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0526 21:52:57.630574  558359 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0526 21:52:57.630595  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0526 21:52:57.717993  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0526 21:52:58.410748  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0526 21:52:58.644157  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0526 21:52:58.742492  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
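	[Editor's note] The five commands above run the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml, with the versioned binaries directory prepended to PATH. The sketch below reproduces that ordering; the fallback `/usr/bin` in PATH is an assumption made to keep it self-contained, since the real command expands the remote `$PATH` inside bash.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// sudo env PATH=<binaries>:<fallback> kubeadm init phase ...
		cmd := exec.Command("sudo", append([]string{"env",
			"PATH=/var/lib/minikube/binaries/v1.17.0:/usr/bin", "kubeadm"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}
```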
	I0526 21:52:58.865481  558359 api_server.go:50] waiting for apiserver process to appear ...
	I0526 21:52:58.865555  558359 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0526 21:52:59.377533  558359 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0526 21:52:59.877334  558359 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0526 21:52:59.890229  558359 api_server.go:70] duration metric: took 1.024745607s to wait for apiserver process to appear ...
	I0526 21:52:59.890256  558359 api_server.go:86] waiting for apiserver healthz status ...
	I0526 21:52:59.890268  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:52:59.890842  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:00.391041  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:20.228462  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": read tcp 192.168.50.1:58284->192.168.50.63:8443: read: connection reset by peer
	I0526 21:53:20.391722  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:20.392286  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:20.891366  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:20.891942  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:21.391652  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:21.392328  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:21.891038  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:21.891620  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:22.391404  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:22.391988  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:22.891653  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:22.892335  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:23.391075  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:23.391855  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:23.891607  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:23.892293  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:24.391025  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:24.391643  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:24.892020  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:24.892701  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:25.391949  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:25.392642  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:25.891797  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:25.892455  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:26.391209  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:26.391908  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:26.891706  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:26.892315  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:27.391036  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:27.391712  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:27.891477  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:27.892050  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:28.391798  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:28.392347  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:28.891081  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:28.891806  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:29.391612  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:29.392223  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:29.891844  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:32.219365  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:32.396940  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:32.400958  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:32.891727  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:32.892281  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:33.391004  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:33.391542  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:33.891075  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:33.891708  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:34.391494  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:34.392206  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:34.891049  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:34.891699  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:35.391796  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:35.395685  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:35.891890  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:35.895698  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:36.392453  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:36.397243  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:36.891584  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:36.892390  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:37.391067  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:37.391814  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:37.891576  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:37.893333  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:38.396127  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:38.396838  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:38.891585  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:38.892219  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:39.392038  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:39.392818  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:39.891961  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:39.892642  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:40.391837  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:40.392524  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:40.891468  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:40.896869  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:41.391554  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:41.392322  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:41.892986  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:41.893600  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:42.391154  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:42.391802  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:42.891551  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:53:42.892162  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:53:43.392034  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:54:03.246718  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": read tcp 192.168.50.1:58448->192.168.50.63:8443: read: connection reset by peer
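	[Editor's note] The long run of "Checking apiserver healthz ... connection refused" lines above is a polling loop: hit https://<node-ip>:8443/healthz roughly every 500ms, tolerate connection refused/reset, and give up only at an overall deadline. A minimal sketch follows; the insecure TLS client is an assumption made to keep the example self-contained rather than wiring up the profile's client certificates.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers
// 200 OK or the deadline expires, mirroring the loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		} else {
			fmt.Println("stopped:", err) // e.g. connect: connection refused
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.63:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```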
	I0526 21:54:03.392031  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:54:03.392108  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:54:03.435241  558359 cri.go:76] found id: "59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d"
	I0526 21:54:03.435273  558359 cri.go:76] found id: "a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731"
	I0526 21:54:03.435280  558359 cri.go:76] found id: ""
	I0526 21:54:03.435288  558359 logs.go:270] 2 containers: [59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731]
	I0526 21:54:03.435350  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:03.442793  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:03.449624  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:54:03.449693  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:54:03.479097  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:54:03.479126  558359 cri.go:76] found id: ""
	I0526 21:54:03.479137  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:54:03.479198  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:03.484744  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:54:03.484804  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:54:03.512956  558359 cri.go:76] found id: ""
	I0526 21:54:03.512981  558359 logs.go:270] 0 containers: []
	W0526 21:54:03.512987  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:54:03.512992  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:54:03.513033  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:54:03.543922  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:54:03.543954  558359 cri.go:76] found id: ""
	I0526 21:54:03.543961  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:54:03.544015  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:03.550874  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:54:03.550935  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:54:03.584699  558359 cri.go:76] found id: ""
	I0526 21:54:03.584741  558359 logs.go:270] 0 containers: []
	W0526 21:54:03.584751  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:54:03.584760  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:54:03.584821  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:54:03.607048  558359 cri.go:76] found id: ""
	I0526 21:54:03.607075  558359 logs.go:270] 0 containers: []
	W0526 21:54:03.607082  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:54:03.607096  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:54:03.607153  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:54:03.633955  558359 cri.go:76] found id: ""
	I0526 21:54:03.633976  558359 logs.go:270] 0 containers: []
	W0526 21:54:03.633981  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:54:03.633988  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:54:03.634044  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:54:03.657867  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:54:03.657890  558359 cri.go:76] found id: ""
	I0526 21:54:03.657898  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:54:03.657945  558359 ssh_runner.go:149] Run: which crictl
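	[Editor's note] The inventory pass above queries each control-plane component by name with `crictl ps -a --quiet --name=<component>` and logs a warning when no container matches. A sketch of that loop, assuming local exec, is shown here.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
		"kube-controller-manager",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the "No container was found matching ..." warnings.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```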
	I0526 21:54:03.665097  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:54:03.665118  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:54:03.745335  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:54:03.745373  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:54:03.787693  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:54:03.787721  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:54:03.869117  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:54:03.869158  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:54:03.902099  558359 logs.go:123] Gathering logs for kube-apiserver [59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d] ...
	I0526 21:54:03.902140  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d"
	I0526 21:54:03.940355  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:54:03.940393  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:54:03.983133  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:54:03.983167  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:54:04.040593  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:54:04.040629  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:54:04.174079  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:54:04.174101  558359 logs.go:123] Gathering logs for kube-apiserver [a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731] ...
	I0526 21:54:04.174115  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731"
	W0526 21:54:04.199789  558359 logs.go:130] failed kube-apiserver [a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731" /bin/bash -c "sudo /bin/crictl logs --tail 400 a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731": Process exited with status 1
	stdout:
	
	stderr:
	E0526 21:54:04.198776    5978 remote_runtime.go:295] ContainerStatus "a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731": does not exist
	time="2021-05-26T21:54:04Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731\": does not exist"
	 output: 
	** stderr ** 
	E0526 21:54:04.198776    5978 remote_runtime.go:295] ContainerStatus "a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731": does not exist
	time="2021-05-26T21:54:04Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"a025d96a46b73126b74cac3a0707afb542ba3263e5a8cd37cbe3eb8299354731\": does not exist"
	
	** /stderr **
	I0526 21:54:04.199833  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:54:04.199853  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
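	[Editor's note] The "Gathering logs for ..." steps above pull container logs with `crictl logs --tail 400 <id>` and service logs with journalctl. A small sketch of that collection pass follows; `runShell` is a hypothetical stand-in for the ssh_runner in the log, and the etcd container ID is the one already shown above.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runShell runs a command through bash, the way the log's ssh_runner does.
func runShell(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	sources := map[string]string{
		"etcd":       "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec",
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
	}
	for name, cmd := range sources {
		out, err := runShell(cmd)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s (%d bytes gathered) ===\n", name, len(out))
	}
}
```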
	I0526 21:54:06.748873  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:54:06.749808  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:54:06.891067  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:54:06.891157  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:54:06.910195  558359 cri.go:76] found id: "d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e"
	I0526 21:54:06.910231  558359 cri.go:76] found id: "59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d"
	I0526 21:54:06.910242  558359 cri.go:76] found id: ""
	I0526 21:54:06.910253  558359 logs.go:270] 2 containers: [d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e 59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d]
	I0526 21:54:06.910329  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:06.915857  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:06.925389  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:54:06.925452  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:54:06.945911  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:54:06.945937  558359 cri.go:76] found id: ""
	I0526 21:54:06.945945  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:54:06.945993  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:06.952780  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:54:06.952846  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:54:06.988994  558359 cri.go:76] found id: ""
	I0526 21:54:06.989023  558359 logs.go:270] 0 containers: []
	W0526 21:54:06.989033  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:54:06.989042  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:54:06.989105  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:54:07.013240  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:54:07.013271  558359 cri.go:76] found id: ""
	I0526 21:54:07.013280  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:54:07.013340  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:07.020802  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:54:07.020905  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:54:07.053130  558359 cri.go:76] found id: ""
	I0526 21:54:07.053158  558359 logs.go:270] 0 containers: []
	W0526 21:54:07.053166  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:54:07.053174  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:54:07.053237  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:54:07.101664  558359 cri.go:76] found id: ""
	I0526 21:54:07.101698  558359 logs.go:270] 0 containers: []
	W0526 21:54:07.101708  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:54:07.101717  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:54:07.101784  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:54:07.125755  558359 cri.go:76] found id: ""
	I0526 21:54:07.125777  558359 logs.go:270] 0 containers: []
	W0526 21:54:07.125785  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:54:07.125793  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:54:07.125854  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:54:07.150896  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:54:07.150918  558359 cri.go:76] found id: ""
	I0526 21:54:07.150927  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:54:07.150985  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:07.156895  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:54:07.156915  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:54:07.200288  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:54:07.200321  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:54:07.288965  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:54:07.289004  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:54:07.329635  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:54:07.329673  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0526 21:54:07.423668  558359 logs.go:138] Found kubelet problem: May 26 21:54:04 running-upgrade-20210526215018-510955 kubelet[5228]: E0526 21:54:04.135980    5228 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:54:07.449533  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:54:07.449572  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:54:07.486995  558359 logs.go:123] Gathering logs for kube-apiserver [d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e] ...
	I0526 21:54:07.487086  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e"
	I0526 21:54:07.509670  558359 logs.go:123] Gathering logs for kube-apiserver [59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d] ...
	I0526 21:54:07.509701  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 59ce4be66acf4f592a24f2a82021b6d8b6aed6a9f939eae304a0e66c21e58d5d"
	I0526 21:54:07.541472  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:54:07.541513  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:54:07.586719  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:54:07.586750  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:54:07.605723  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:54:07.605756  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0526 21:54:29.181288  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.575493025s)
	W0526 21:54:29.181332  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:54:29.181344  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:54:29.181357  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0526 21:54:29.181472  558359 out.go:235] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0526 21:54:29.181487  558359 out.go:235]   May 26 21:54:04 running-upgrade-20210526215018-510955 kubelet[5228]: E0526 21:54:04.135980    5228 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:54:04 running-upgrade-20210526215018-510955 kubelet[5228]: E0526 21:54:04.135980    5228 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:54:29.181494  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:54:29.181499  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
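	[Editor's note] The "X Problems detected in kubelet" summary above is produced by scanning the kubelet journal for error entries such as the CrashLoopBackOff line on the kube-apiserver pod. The sketch below shows one way such a scan could look; the regexp is an assumption chosen to match the entry in this log, not minikube's actual detection rule.

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Assumed pattern: error-level pod_workers lines like the
	// "Error syncing pod ... CrashLoopBackOff" entry above.
	problem := regexp.MustCompile(`pod_workers\.go:\d+\] Error syncing pod`)
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	var problems []string
	for sc.Scan() {
		if problem.MatchString(sc.Text()) {
			problems = append(problems, sc.Text())
		}
	}
	if len(problems) > 0 {
		fmt.Println("X Problems detected in kubelet:")
		for _, p := range problems {
			fmt.Println("  " + p)
		}
	}
}
```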
	I0526 21:54:39.183041  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:54:59.339017  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": read tcp 192.168.50.1:58514->192.168.50.63:8443: read: connection reset by peer
	I0526 21:54:59.391774  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:54:59.391870  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:54:59.409580  558359 cri.go:76] found id: "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	I0526 21:54:59.409612  558359 cri.go:76] found id: "d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e"
	I0526 21:54:59.409619  558359 cri.go:76] found id: ""
	I0526 21:54:59.409629  558359 logs.go:270] 2 containers: [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e]
	I0526 21:54:59.409695  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:59.420685  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:59.429597  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:54:59.429680  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:54:59.449911  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:54:59.449932  558359 cri.go:76] found id: ""
	I0526 21:54:59.449939  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:54:59.449986  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:59.458422  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:54:59.458481  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:54:59.476561  558359 cri.go:76] found id: ""
	I0526 21:54:59.476580  558359 logs.go:270] 0 containers: []
	W0526 21:54:59.476589  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:54:59.476596  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:54:59.476647  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:54:59.496235  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:54:59.496254  558359 cri.go:76] found id: ""
	I0526 21:54:59.496260  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:54:59.496301  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:59.500416  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:54:59.500482  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:54:59.517582  558359 cri.go:76] found id: ""
	I0526 21:54:59.517605  558359 logs.go:270] 0 containers: []
	W0526 21:54:59.517613  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:54:59.517620  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:54:59.517671  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:54:59.534856  558359 cri.go:76] found id: ""
	I0526 21:54:59.534878  558359 logs.go:270] 0 containers: []
	W0526 21:54:59.534885  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:54:59.534895  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:54:59.534944  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:54:59.554707  558359 cri.go:76] found id: ""
	I0526 21:54:59.554730  558359 logs.go:270] 0 containers: []
	W0526 21:54:59.554736  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:54:59.554742  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:54:59.554792  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:54:59.574858  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:54:59.574882  558359 cri.go:76] found id: ""
	I0526 21:54:59.574891  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:54:59.574939  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:54:59.578997  558359 logs.go:123] Gathering logs for kube-apiserver [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d] ...
	I0526 21:54:59.579015  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	I0526 21:54:59.600746  558359 logs.go:123] Gathering logs for kube-apiserver [d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e] ...
	I0526 21:54:59.600768  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 d7232792a8530d56c9efd8bf295f3a0bcd31bccd9c35d37030bdef59f0d79e5e"
	I0526 21:54:59.622931  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:54:59.622953  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:54:59.653644  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:54:59.653672  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 21:54:59.714308  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:54:59.714337  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:54:59.792538  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:54:59.792566  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:54:59.792579  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:54:59.817327  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:54:59.817351  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:54:59.849391  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:54:59.849427  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:54:59.912708  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:54:59.912734  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:54:59.949323  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:54:59.949349  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:55:02.461428  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:55:02.462014  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:55:02.891414  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:55:02.891543  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:55:02.911456  558359 cri.go:76] found id: "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	I0526 21:55:02.911477  558359 cri.go:76] found id: ""
	I0526 21:55:02.911484  558359 logs.go:270] 1 containers: [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d]
	I0526 21:55:02.911536  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:02.915802  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:55:02.915866  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:55:02.932266  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:55:02.932291  558359 cri.go:76] found id: ""
	I0526 21:55:02.932299  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:55:02.932346  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:02.936277  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:55:02.936332  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:55:02.956284  558359 cri.go:76] found id: ""
	I0526 21:55:02.956299  558359 logs.go:270] 0 containers: []
	W0526 21:55:02.956304  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:55:02.956309  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:55:02.956347  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:55:02.971914  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:55:02.971934  558359 cri.go:76] found id: ""
	I0526 21:55:02.971942  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:55:02.971981  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:02.975895  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:55:02.975944  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:55:02.997359  558359 cri.go:76] found id: ""
	I0526 21:55:02.997380  558359 logs.go:270] 0 containers: []
	W0526 21:55:02.997387  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:55:02.997393  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:55:02.997436  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:55:03.011118  558359 cri.go:76] found id: ""
	I0526 21:55:03.011138  558359 logs.go:270] 0 containers: []
	W0526 21:55:03.011146  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:55:03.011153  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:55:03.011196  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:55:03.027163  558359 cri.go:76] found id: ""
	I0526 21:55:03.027176  558359 logs.go:270] 0 containers: []
	W0526 21:55:03.027181  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:55:03.027186  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:55:03.027215  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:55:03.047816  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:55:03.047831  558359 cri.go:76] found id: ""
	I0526 21:55:03.047836  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:55:03.047867  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:03.052086  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:55:03.052104  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0526 21:55:03.105456  558359 logs.go:138] Found kubelet problem: May 26 21:54:59 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:54:59.993277    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:55:03.112427  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:55:03.112449  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:55:03.203372  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:55:03.203398  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:55:03.203413  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:55:03.232701  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:55:03.232727  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:55:03.263870  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:55:03.263895  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:55:03.295390  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:55:03.295415  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:55:03.359384  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:55:03.359410  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:55:03.369666  558359 logs.go:123] Gathering logs for kube-apiserver [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d] ...
	I0526 21:55:03.369686  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	I0526 21:55:03.388776  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:55:03.388795  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:55:03.410175  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:55:03.410193  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0526 21:55:03.410306  558359 out.go:235] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0526 21:55:03.410320  558359 out.go:235]   May 26 21:54:59 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:54:59.993277    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:54:59 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:54:59.993277    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:55:03.410327  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:55:03.410334  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:55:13.412220  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:55:13.413146  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:55:13.891779  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:55:13.891867  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:55:13.909937  558359 cri.go:76] found id: "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	I0526 21:55:13.909960  558359 cri.go:76] found id: ""
	I0526 21:55:13.909968  558359 logs.go:270] 1 containers: [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d]
	I0526 21:55:13.910018  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:13.914431  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:55:13.914485  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:55:13.933719  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:55:13.933743  558359 cri.go:76] found id: ""
	I0526 21:55:13.933751  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:55:13.933794  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:13.937764  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:55:13.937823  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:55:13.953999  558359 cri.go:76] found id: ""
	I0526 21:55:13.954019  558359 logs.go:270] 0 containers: []
	W0526 21:55:13.954025  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:55:13.954032  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:55:13.954076  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:55:13.969958  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:55:13.969977  558359 cri.go:76] found id: ""
	I0526 21:55:13.969984  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:55:13.970025  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:13.973880  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:55:13.973934  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:55:13.989550  558359 cri.go:76] found id: ""
	I0526 21:55:13.989569  558359 logs.go:270] 0 containers: []
	W0526 21:55:13.989574  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:55:13.989579  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:55:13.989627  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:55:14.004047  558359 cri.go:76] found id: ""
	I0526 21:55:14.004068  558359 logs.go:270] 0 containers: []
	W0526 21:55:14.004075  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:55:14.004084  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:55:14.004127  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:55:14.021242  558359 cri.go:76] found id: ""
	I0526 21:55:14.021264  558359 logs.go:270] 0 containers: []
	W0526 21:55:14.021272  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:55:14.021279  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:55:14.021327  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:55:14.034427  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:55:14.034459  558359 cri.go:76] found id: ""
	I0526 21:55:14.034466  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:55:14.034503  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:14.038412  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:55:14.038433  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:55:14.066565  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:55:14.066596  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:55:14.095843  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:55:14.095868  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:55:14.132888  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:55:14.132915  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:55:14.143727  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:55:14.143753  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:55:14.218054  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:55:14.218076  558359 logs.go:123] Gathering logs for kube-apiserver [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d] ...
	I0526 21:55:14.218087  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	I0526 21:55:14.236897  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:55:14.236919  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:55:14.303428  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:55:14.303459  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:55:14.328361  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:55:14.328382  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0526 21:55:14.367968  558359 logs.go:138] Found kubelet problem: May 26 21:54:59 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:54:59.993277    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	W0526 21:55:14.385361  558359 logs.go:138] Found kubelet problem: May 26 21:55:08 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:55:08.367131    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:55:14.397420  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:55:14.397441  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0526 21:55:14.397562  558359 out.go:235] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0526 21:55:14.397576  558359 out.go:235]   May 26 21:54:59 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:54:59.993277    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:54:59 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:54:59.993277    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	W0526 21:55:14.397584  558359 out.go:235]   May 26 21:55:08 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:55:08.367131    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:55:08 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:55:08.367131    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:55:14.397589  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:55:14.397594  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:55:24.399259  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:55:40.567065  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": read tcp 192.168.50.1:58552->192.168.50.63:8443: read: connection reset by peer
	I0526 21:55:40.892085  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:55:40.892231  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:55:40.916356  558359 cri.go:76] found id: "70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc"
	I0526 21:55:40.916375  558359 cri.go:76] found id: "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	I0526 21:55:40.916380  558359 cri.go:76] found id: ""
	I0526 21:55:40.916386  558359 logs.go:270] 2 containers: [70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d]
	I0526 21:55:40.916423  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:40.920524  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:40.924581  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:55:40.924634  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:55:40.941600  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:55:40.941616  558359 cri.go:76] found id: ""
	I0526 21:55:40.941621  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:55:40.941649  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:40.945790  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:55:40.945847  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:55:40.962231  558359 cri.go:76] found id: ""
	I0526 21:55:40.962259  558359 logs.go:270] 0 containers: []
	W0526 21:55:40.962266  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:55:40.962273  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:55:40.962312  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:55:40.978158  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:55:40.978175  558359 cri.go:76] found id: ""
	I0526 21:55:40.978182  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:55:40.978214  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:40.983160  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:55:40.983210  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:55:41.002278  558359 cri.go:76] found id: ""
	I0526 21:55:41.002291  558359 logs.go:270] 0 containers: []
	W0526 21:55:41.002296  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:55:41.002301  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:55:41.002341  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:55:41.022484  558359 cri.go:76] found id: ""
	I0526 21:55:41.022502  558359 logs.go:270] 0 containers: []
	W0526 21:55:41.022509  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:55:41.022515  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:55:41.022557  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:55:41.038816  558359 cri.go:76] found id: ""
	I0526 21:55:41.038838  558359 logs.go:270] 0 containers: []
	W0526 21:55:41.038845  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:55:41.038853  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:55:41.038900  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:55:41.059759  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:55:41.059782  558359 cri.go:76] found id: ""
	I0526 21:55:41.059789  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:55:41.059839  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:55:41.065014  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:55:41.065081  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:55:41.103248  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:55:41.103277  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:55:41.149256  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:55:41.149283  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:55:41.231440  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:55:41.231469  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:55:41.255388  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:55:41.255410  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:55:41.341167  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:55:41.341196  558359 logs.go:123] Gathering logs for kube-apiserver [70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc] ...
	I0526 21:55:41.341210  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc"
	I0526 21:55:41.365160  558359 logs.go:123] Gathering logs for kube-apiserver [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d] ...
	I0526 21:55:41.365195  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d"
	W0526 21:55:41.383432  558359 logs.go:130] failed kube-apiserver [c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d" /bin/bash -c "sudo /bin/crictl logs --tail 400 c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d": Process exited with status 1
	stdout:
	
	stderr:
	E0526 21:55:41.384349    7375 remote_runtime.go:295] ContainerStatus "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d": does not exist
	time="2021-05-26T21:55:41Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d\": does not exist"
	 output: 
	** stderr ** 
	E0526 21:55:41.384349    7375 remote_runtime.go:295] ContainerStatus "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d": does not exist
	time="2021-05-26T21:55:41Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"c34960d806915476e1bb78b2a5607a38effe38c948cee59060881b1cc268076d\": does not exist"
	
	** /stderr **
	I0526 21:55:41.383453  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:55:41.383463  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0526 21:55:41.448701  558359 logs.go:138] Found kubelet problem: May 26 21:55:41 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:55:41.114522    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:55:41.449105  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:55:41.449122  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:55:41.460680  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:55:41.460714  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:55:41.494946  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:55:41.494969  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0526 21:55:41.495075  558359 out.go:235] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0526 21:55:41.495089  558359 out.go:235]   May 26 21:55:41 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:55:41.114522    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:55:41 running-upgrade-20210526215018-510955 kubelet[6007]: E0526 21:55:41.114522    6007 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:55:41.495096  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:55:41.495103  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:55:51.496904  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:56:04.785120  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": read tcp 192.168.50.1:58564->192.168.50.63:8443: read: connection reset by peer
	I0526 21:56:04.891995  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:56:04.892089  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:56:04.917864  558359 cri.go:76] found id: "ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11"
	I0526 21:56:04.917893  558359 cri.go:76] found id: "70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc"
	I0526 21:56:04.917901  558359 cri.go:76] found id: ""
	I0526 21:56:04.917908  558359 logs.go:270] 2 containers: [ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11 70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc]
	I0526 21:56:04.917975  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:04.923303  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:04.928151  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:56:04.928210  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:56:04.953655  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:56:04.953684  558359 cri.go:76] found id: ""
	I0526 21:56:04.953693  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:56:04.953743  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:04.959430  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:56:04.959485  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:56:04.979612  558359 cri.go:76] found id: ""
	I0526 21:56:04.979636  558359 logs.go:270] 0 containers: []
	W0526 21:56:04.979642  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:56:04.979649  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:56:04.979693  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:56:04.997875  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:56:04.997895  558359 cri.go:76] found id: ""
	I0526 21:56:04.997903  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:56:04.997944  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:05.003943  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:56:05.003992  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:56:05.031897  558359 cri.go:76] found id: ""
	I0526 21:56:05.031959  558359 logs.go:270] 0 containers: []
	W0526 21:56:05.031975  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:56:05.031983  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:56:05.032090  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:56:05.055311  558359 cri.go:76] found id: ""
	I0526 21:56:05.055335  558359 logs.go:270] 0 containers: []
	W0526 21:56:05.055342  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:56:05.055349  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:56:05.055400  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:56:05.076046  558359 cri.go:76] found id: ""
	I0526 21:56:05.076069  558359 logs.go:270] 0 containers: []
	W0526 21:56:05.076077  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:56:05.076085  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:56:05.076135  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:56:05.102880  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:56:05.102902  558359 cri.go:76] found id: ""
	I0526 21:56:05.102909  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:56:05.102958  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:05.107360  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:56:05.107391  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:56:05.120364  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:56:05.120391  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:56:05.221164  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:56:05.221196  558359 logs.go:123] Gathering logs for kube-apiserver [ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11] ...
	I0526 21:56:05.221214  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11"
	I0526 21:56:05.253572  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:56:05.253610  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:56:05.291258  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:56:05.291293  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:56:05.319051  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:56:05.319082  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0526 21:56:05.389746  558359 logs.go:138] Found kubelet problem: May 26 21:56:05 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:05.028495    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:56:05.390006  558359 logs.go:123] Gathering logs for kube-apiserver [70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc] ...
	I0526 21:56:05.390023  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc"
	W0526 21:56:05.413140  558359 logs.go:130] failed kube-apiserver [70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc" /bin/bash -c "sudo /bin/crictl logs --tail 400 70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc": Process exited with status 1
	stdout:
	
	stderr:
	E0526 21:56:05.413480    7699 remote_runtime.go:295] ContainerStatus "70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc": does not exist
	time="2021-05-26T21:56:05Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc\": does not exist"
	 output: 
	** stderr ** 
	E0526 21:56:05.413480    7699 remote_runtime.go:295] ContainerStatus "70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc": does not exist
	time="2021-05-26T21:56:05Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"70a9f6d8d74a3211cc4dc6c780438794c8afcbe275cd9ffb885e422311bfe3fc\": does not exist"
	
	** /stderr **
	I0526 21:56:05.413162  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:56:05.413174  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:56:05.446811  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:56:05.446856  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:56:05.480850  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:56:05.480903  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:56:05.559240  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:56:05.559272  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0526 21:56:05.559556  558359 out.go:235] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0526 21:56:05.559575  558359 out.go:235]   May 26 21:56:05 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:05.028495    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:56:05 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:05.028495    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:56:05.559583  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:56:05.559593  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:56:15.561501  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:56:15.562151  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": dial tcp 192.168.50.63:8443: connect: connection refused
	I0526 21:56:15.891222  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:56:15.891304  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:56:15.908153  558359 cri.go:76] found id: "ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11"
	I0526 21:56:15.908182  558359 cri.go:76] found id: ""
	I0526 21:56:15.908190  558359 logs.go:270] 1 containers: [ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11]
	I0526 21:56:15.908252  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:15.912979  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:56:15.913033  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:56:15.930269  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:56:15.930294  558359 cri.go:76] found id: ""
	I0526 21:56:15.930302  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:56:15.930343  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:15.934985  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:56:15.935040  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:56:15.952913  558359 cri.go:76] found id: ""
	I0526 21:56:15.952933  558359 logs.go:270] 0 containers: []
	W0526 21:56:15.952940  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:56:15.952946  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:56:15.952995  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:56:15.970476  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:56:15.970492  558359 cri.go:76] found id: ""
	I0526 21:56:15.970498  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:56:15.970532  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:15.975011  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:56:15.975062  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:56:15.989818  558359 cri.go:76] found id: ""
	I0526 21:56:15.989834  558359 logs.go:270] 0 containers: []
	W0526 21:56:15.989840  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:56:15.989845  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:56:15.989879  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:56:16.010720  558359 cri.go:76] found id: ""
	I0526 21:56:16.010738  558359 logs.go:270] 0 containers: []
	W0526 21:56:16.010744  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:56:16.010750  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:56:16.010782  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:56:16.027213  558359 cri.go:76] found id: ""
	I0526 21:56:16.027231  558359 logs.go:270] 0 containers: []
	W0526 21:56:16.027236  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:56:16.027241  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:56:16.027282  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:56:16.043383  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:56:16.043405  558359 cri.go:76] found id: ""
	I0526 21:56:16.043413  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:56:16.043457  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:16.047921  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:56:16.047939  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0526 21:56:16.094653  558359 logs.go:138] Found kubelet problem: May 26 21:56:05 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:05.028495    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	W0526 21:56:16.106999  558359 logs.go:138] Found kubelet problem: May 26 21:56:10 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:10.743635    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:56:16.118298  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:56:16.118326  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:56:16.197771  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:56:16.197798  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:56:16.197810  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:56:16.225050  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:56:16.225080  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:56:16.262701  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:56:16.262744  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:56:16.329480  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:56:16.329513  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:56:16.341020  558359 logs.go:123] Gathering logs for kube-apiserver [ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11] ...
	I0526 21:56:16.341046  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11"
	I0526 21:56:16.371525  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:56:16.371557  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:56:16.410326  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:56:16.410359  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:56:16.435110  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:56:16.435138  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0526 21:56:16.435275  558359 out.go:235] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0526 21:56:16.435299  558359 out.go:235]   May 26 21:56:05 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:05.028495    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:56:05 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:05.028495    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	W0526 21:56:16.435308  558359 out.go:235]   May 26 21:56:10 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:10.743635    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:56:10 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:10.743635    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:56:16.435320  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:56:16.435327  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:56:26.436089  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:56:46.752994  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": read tcp 192.168.50.1:58608->192.168.50.63:8443: read: connection reset by peer
	I0526 21:56:46.892000  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 21:56:46.892074  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 21:56:46.910467  558359 cri.go:76] found id: "171d5eced9661a9fc9f577439216869fd1c61bf91f7f119967402ff2eed9a0f4"
	I0526 21:56:46.910485  558359 cri.go:76] found id: "ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11"
	I0526 21:56:46.910490  558359 cri.go:76] found id: ""
	I0526 21:56:46.910495  558359 logs.go:270] 2 containers: [171d5eced9661a9fc9f577439216869fd1c61bf91f7f119967402ff2eed9a0f4 ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11]
	I0526 21:56:46.910536  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:46.914544  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:46.918829  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 21:56:46.918884  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 21:56:46.941456  558359 cri.go:76] found id: "0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:56:46.941475  558359 cri.go:76] found id: ""
	I0526 21:56:46.941483  558359 logs.go:270] 1 containers: [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec]
	I0526 21:56:46.941514  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:46.945440  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 21:56:46.945500  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 21:56:46.962133  558359 cri.go:76] found id: ""
	I0526 21:56:46.962149  558359 logs.go:270] 0 containers: []
	W0526 21:56:46.962154  558359 logs.go:272] No container was found matching "coredns"
	I0526 21:56:46.962159  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 21:56:46.962189  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 21:56:46.978644  558359 cri.go:76] found id: "2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:56:46.978660  558359 cri.go:76] found id: ""
	I0526 21:56:46.978666  558359 logs.go:270] 1 containers: [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67]
	I0526 21:56:46.978696  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:46.982607  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 21:56:46.982644  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 21:56:46.998060  558359 cri.go:76] found id: ""
	I0526 21:56:46.998081  558359 logs.go:270] 0 containers: []
	W0526 21:56:46.998086  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 21:56:46.998091  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 21:56:46.998129  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 21:56:47.014961  558359 cri.go:76] found id: ""
	I0526 21:56:47.014980  558359 logs.go:270] 0 containers: []
	W0526 21:56:47.014985  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 21:56:47.014991  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 21:56:47.015023  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 21:56:47.031625  558359 cri.go:76] found id: ""
	I0526 21:56:47.031650  558359 logs.go:270] 0 containers: []
	W0526 21:56:47.031655  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 21:56:47.031660  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 21:56:47.031691  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 21:56:47.047987  558359 cri.go:76] found id: "a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:56:47.048005  558359 cri.go:76] found id: ""
	I0526 21:56:47.048011  558359 logs.go:270] 1 containers: [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f]
	I0526 21:56:47.048042  558359 ssh_runner.go:149] Run: which crictl
	I0526 21:56:47.052704  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 21:56:47.052721  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 21:56:47.063039  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 21:56:47.063063  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 21:56:47.152925  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 21:56:47.152949  558359 logs.go:123] Gathering logs for kube-scheduler [2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67] ...
	I0526 21:56:47.152960  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 2aa93735f456937dd6bc96ddfe4320c6be1c44f039abe2a485527ad8b14ace67"
	I0526 21:56:47.199172  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 21:56:47.199215  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0526 21:56:47.280326  558359 logs.go:138] Found kubelet problem: May 26 21:56:47 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:47.160335    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:56:47.280361  558359 logs.go:123] Gathering logs for kube-apiserver [171d5eced9661a9fc9f577439216869fd1c61bf91f7f119967402ff2eed9a0f4] ...
	I0526 21:56:47.280378  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 171d5eced9661a9fc9f577439216869fd1c61bf91f7f119967402ff2eed9a0f4"
	I0526 21:56:47.313469  558359 logs.go:123] Gathering logs for kube-apiserver [ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11] ...
	I0526 21:56:47.313504  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11"
	W0526 21:56:47.332713  558359 logs.go:130] failed kube-apiserver [ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11" /bin/bash -c "sudo /bin/crictl logs --tail 400 ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11": Process exited with status 1
	stdout:
	
	stderr:
	E0526 21:56:47.333578    8319 remote_runtime.go:295] ContainerStatus "ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11": does not exist
	time="2021-05-26T21:56:47Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11\": does not exist"
	 output: 
	** stderr ** 
	E0526 21:56:47.333578    8319 remote_runtime.go:295] ContainerStatus "ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11": does not exist
	time="2021-05-26T21:56:47Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"ca4cef15a3d4827db221e36076490814217bd075d367f85b43b8d65bb43b7a11\": does not exist"
	
	** /stderr **
	I0526 21:56:47.332741  558359 logs.go:123] Gathering logs for etcd [0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec] ...
	I0526 21:56:47.332758  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0dd6df36a3eee7921165299f6e31311d260b52d58be6d96b91e3d1899e4035ec"
	I0526 21:56:47.363575  558359 logs.go:123] Gathering logs for kube-controller-manager [a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f] ...
	I0526 21:56:47.363608  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 a15a1724ee7df22edc8ef24287d9832227b7ea84fa38b502218aca961700fd1f"
	I0526 21:56:47.405501  558359 logs.go:123] Gathering logs for containerd ...
	I0526 21:56:47.405532  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 21:56:47.478058  558359 logs.go:123] Gathering logs for container status ...
	I0526 21:56:47.478086  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 21:56:47.510980  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:56:47.511007  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0526 21:56:47.511128  558359 out.go:235] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0526 21:56:47.511138  558359 out.go:235]   May 26 21:56:47 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:47.160335    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	  May 26 21:56:47 running-upgrade-20210526215018-510955 kubelet[7404]: E0526 21:56:47.160335    7404 pod_workers.go:191] Error syncing pod 52d3bd95fc93c917a5a0edc6e16385fc ("kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20210526215018-510955_kube-system(52d3bd95fc93c917a5a0edc6e16385fc)"
	I0526 21:56:47.511146  558359 out.go:304] Setting ErrFile to fd 2...
	I0526 21:56:47.511152  558359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:56:57.512680  558359 api_server.go:223] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0526 21:57:10.420639  558359 api_server.go:239] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": read tcp 192.168.50.1:58670->192.168.50.63:8443: read: connection reset by peer
	I0526 21:57:10.420722  558359 kubeadm.go:604] restartCluster took 4m23.768417941s
	W0526 21:57:10.420848  558359 out.go:235] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	I0526 21:57:10.420905  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0526 21:57:12.192749  558359 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.771797679s)
	I0526 21:57:12.192829  558359 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0526 21:57:12.207861  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0526 21:57:12.207929  558359 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 21:57:12.232651  558359 cri.go:76] found id: ""
	I0526 21:57:12.232712  558359 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0526 21:57:12.242750  558359 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0526 21:57:12.250506  558359 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 21:57:12.250555  558359 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0526 21:57:12.745441  558359 out.go:197]   - Generating certificates and keys ...
	I0526 21:57:13.650020  558359 out.go:197]   - Booting up control plane ...
	W0526 22:01:13.667556  558359 out.go:235] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 21:57:12.338689    8977 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0526 21:57:12.338944    8977 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 21:57:13.660475    8977 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 21:57:13.661765    8977 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 21:57:12.338689    8977 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0526 21:57:12.338944    8977 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 21:57:13.660475    8977 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 21:57:13.661765    8977 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0526 22:01:13.667629  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0526 22:01:14.167883  558359 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0526 22:01:14.182369  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0526 22:01:14.182454  558359 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 22:01:14.201401  558359 cri.go:76] found id: ""
	I0526 22:01:14.201493  558359 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0526 22:01:14.209413  558359 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 22:01:14.209455  558359 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0526 22:01:16.575168  558359 out.go:197]   - Generating certificates and keys ...
	I0526 22:01:19.640404  558359 out.go:197]   - Booting up control plane ...
	I0526 22:05:15.805943  558359 kubeadm.go:392] StartCluster complete in 12m29.229372005s
	I0526 22:05:15.805997  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 22:05:15.806043  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 22:05:15.832878  558359 cri.go:76] found id: ""
	I0526 22:05:15.832899  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.832907  558359 logs.go:272] No container was found matching "kube-apiserver"
	I0526 22:05:15.832915  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 22:05:15.832977  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 22:05:15.859276  558359 cri.go:76] found id: ""
	I0526 22:05:15.859300  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.859307  558359 logs.go:272] No container was found matching "etcd"
	I0526 22:05:15.859318  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 22:05:15.859388  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 22:05:15.885218  558359 cri.go:76] found id: ""
	I0526 22:05:15.885240  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.885248  558359 logs.go:272] No container was found matching "coredns"
	I0526 22:05:15.885257  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 22:05:15.885327  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 22:05:15.909003  558359 cri.go:76] found id: ""
	I0526 22:05:15.909024  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.909031  558359 logs.go:272] No container was found matching "kube-scheduler"
	I0526 22:05:15.909038  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 22:05:15.909087  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 22:05:15.929574  558359 cri.go:76] found id: ""
	I0526 22:05:15.929596  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.929603  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 22:05:15.929612  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 22:05:15.929670  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 22:05:15.955184  558359 cri.go:76] found id: ""
	I0526 22:05:15.955207  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.955213  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 22:05:15.955219  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 22:05:15.955272  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 22:05:15.975887  558359 cri.go:76] found id: ""
	I0526 22:05:15.975920  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.975930  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 22:05:15.975940  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 22:05:15.976005  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 22:05:15.996208  558359 cri.go:76] found id: ""
	I0526 22:05:15.996229  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.996237  558359 logs.go:272] No container was found matching "kube-controller-manager"
	I0526 22:05:15.996251  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 22:05:15.996270  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 22:05:16.076587  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 22:05:16.076623  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 22:05:16.090634  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 22:05:16.090663  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 22:05:16.188309  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 22:05:16.188340  558359 logs.go:123] Gathering logs for containerd ...
	I0526 22:05:16.188359  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 22:05:16.278354  558359 logs.go:123] Gathering logs for container status ...
	I0526 22:05:16.278389  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0526 22:05:16.309975  558359 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0526 22:05:16.310012  558359 out.go:235] * 
	* 
	W0526 22:05:16.310220  558359 out.go:235] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0526 22:05:16.310248  558359 out.go:235] * 
	* 
	W0526 22:05:16.312189  558359 out.go:235] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	W0526 22:05:16.312204  558359 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 22:05:16.312213  558359 out.go:235] │    * If the above advice does not help, please let us know:                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                                                 │
	W0526 22:05:16.312219  558359 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	W0526 22:05:16.312225  558359 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 22:05:16.312238  558359 out.go:235] │    * Please attach the following file to the GitHub issue:                                                                                                  │
	│    * Please attach the following file to the GitHub issue:                                                                                                  │
	W0526 22:05:16.312248  558359 out.go:235] │    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	W0526 22:05:16.312256  558359 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 22:05:16.312267  558359 out.go:235] ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	W0526 22:05:16.312277  558359 out.go:235] 
	
	I0526 22:05:16.315625  558359 out.go:170] 
	W0526 22:05:16.315854  558359 out.go:235] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0526 22:05:16.315954  558359 out.go:235] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0526 22:05:16.316008  558359 out.go:235] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0526 22:05:16.318669  558359 out.go:170] 

                                                
                                                
** /stderr **
version_upgrade_test.go:131: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-20210526215018-510955 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109
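The upgraded binary exits with status 109 after printing "Exiting due to K8S_KUBELET_NOT_RUNNING": kubeadm init repeatedly times out at the wait-control-plane phase because the kubelet never brings up the static control-plane pods. The only remediation hint captured above is minikube's own suggestion to pin the kubelet cgroup driver to systemd. A hypothetical retry of the same profile with that suggestion applied (flags copied from the failing invocation; the --extra-config value is the log's suggestion, not a verified fix) would look like:

	out/minikube-linux-amd64 start -p running-upgrade-20210526215018-510955 \
	  --memory=2200 --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd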
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-05-26 22:05:16.416619889 +0000 UTC m=+5151.534168341
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210526215018-510955 -n running-upgrade-20210526215018-510955
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210526215018-510955 -n running-upgrade-20210526215018-510955: exit status 2 (258.926394ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:235: status error: exit status 2 (may be ok)
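Before the harness gathers its own post-mortem below, the kubeadm output above already names the manual checks to run on the node: 'systemctl status kubelet' and 'journalctl -xeu kubelet'. On this profile they could be driven through minikube ssh, the same way the Audit table later shows 'pgrep -a kubelet' being run, and mirroring the 'journalctl -u kubelet -n 400' call minikube's log collection issues itself (a hypothetical diagnostic sketch, not commands from the recorded run):

	out/minikube-linux-amd64 ssh -p running-upgrade-20210526215018-510955 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p running-upgrade-20210526215018-510955 "sudo journalctl -u kubelet -n 400"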
helpers_test.go:240: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p running-upgrade-20210526215018-510955 logs -n 25
helpers_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p running-upgrade-20210526215018-510955 logs -n 25: exit status 110 (650.401241ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                       | force-systemd-flag-20210526215127-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:53:32 UTC | Wed, 26 May 2021 21:53:34 UTC |
	|         | force-systemd-flag-20210526215127-510955 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210526215256-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:52:56 UTC | Wed, 26 May 2021 21:54:07 UTC |
	|         | kubernetes-upgrade-20210526215256-510955 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210526215256-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:54:07 UTC | Wed, 26 May 2021 21:55:40 UTC |
	|         | kubernetes-upgrade-20210526215256-510955 |                                          |         |         |                               |                               |
	| start   | -p auto-20210526215016-510955            | auto-20210526215016-510955               | jenkins | v1.20.0 | Wed, 26 May 2021 21:53:34 UTC | Wed, 26 May 2021 21:56:21 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p auto-20210526215016-510955            | auto-20210526215016-510955               | jenkins | v1.20.0 | Wed, 26 May 2021 21:56:21 UTC | Wed, 26 May 2021 21:56:21 UTC |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p auto-20210526215016-510955            | auto-20210526215016-510955               | jenkins | v1.20.0 | Wed, 26 May 2021 21:56:31 UTC | Wed, 26 May 2021 21:56:32 UTC |
	| start   | -p                                       | kubernetes-upgrade-20210526215256-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:55:40 UTC | Wed, 26 May 2021 21:57:12 UTC |
	|         | kubernetes-upgrade-20210526215256-510955 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-alpha.1     |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210526215256-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:57:12 UTC | Wed, 26 May 2021 21:57:43 UTC |
	|         | kubernetes-upgrade-20210526215256-510955 |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-alpha.1     |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210526215256-510955 | jenkins | v1.20.0 | Wed, 26 May 2021 21:57:43 UTC | Wed, 26 May 2021 21:57:45 UTC |
	|         | kubernetes-upgrade-20210526215256-510955 |                                          |         |         |                               |                               |
	| start   | -p                                       | cilium-20210526215017-510955             | jenkins | v1.20.0 | Wed, 26 May 2021 21:56:32 UTC | Wed, 26 May 2021 21:59:17 UTC |
	|         | cilium-20210526215017-510955             |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=cilium --driver=kvm2               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | cilium-20210526215017-510955             | jenkins | v1.20.0 | Wed, 26 May 2021 21:59:22 UTC | Wed, 26 May 2021 21:59:22 UTC |
	|         | cilium-20210526215017-510955             |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p                                       | cilium-20210526215017-510955             | jenkins | v1.20.0 | Wed, 26 May 2021 21:59:34 UTC | Wed, 26 May 2021 21:59:35 UTC |
	|         | cilium-20210526215017-510955             |                                          |         |         |                               |                               |
	| start   | -p                                       | calico-20210526215017-510955             | jenkins | v1.20.0 | Wed, 26 May 2021 21:57:45 UTC | Wed, 26 May 2021 22:00:23 UTC |
	|         | calico-20210526215017-510955             |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=calico --driver=kvm2               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | calico-20210526215017-510955             | jenkins | v1.20.0 | Wed, 26 May 2021 22:00:28 UTC | Wed, 26 May 2021 22:00:29 UTC |
	|         | calico-20210526215017-510955             |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p                                       | calico-20210526215017-510955             | jenkins | v1.20.0 | Wed, 26 May 2021 22:00:43 UTC | Wed, 26 May 2021 22:00:44 UTC |
	|         | calico-20210526215017-510955             |                                          |         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210526214750-510955    | jenkins | v1.20.0 | Wed, 26 May 2021 22:02:03 UTC | Wed, 26 May 2021 22:02:04 UTC |
	|         | stopped-upgrade-20210526214750-510955    |                                          |         |         |                               |                               |
	| start   | -p                                       | custom-weave-20210526215017-510955       | jenkins | v1.20.0 | Wed, 26 May 2021 21:59:35 UTC | Wed, 26 May 2021 22:02:27 UTC |
	|         | custom-weave-20210526215017-510955       |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=testdata/weavenet.yaml             |                                          |         |         |                               |                               |
	|         | --driver=kvm2                            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | custom-weave-20210526215017-510955       | jenkins | v1.20.0 | Wed, 26 May 2021 22:02:27 UTC | Wed, 26 May 2021 22:02:28 UTC |
	|         | custom-weave-20210526215017-510955       |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p                                       | custom-weave-20210526215017-510955       | jenkins | v1.20.0 | Wed, 26 May 2021 22:02:41 UTC | Wed, 26 May 2021 22:02:42 UTC |
	|         | custom-weave-20210526215017-510955       |                                          |         |         |                               |                               |
	| start   | -p                                       | kindnet-20210526215016-510955            | jenkins | v1.20.0 | Wed, 26 May 2021 22:00:44 UTC | Wed, 26 May 2021 22:03:42 UTC |
	|         | kindnet-20210526215016-510955            |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=kindnet --driver=kvm2              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | kindnet-20210526215016-510955            | jenkins | v1.20.0 | Wed, 26 May 2021 22:03:47 UTC | Wed, 26 May 2021 22:03:47 UTC |
	|         | kindnet-20210526215016-510955            |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p                                       | kindnet-20210526215016-510955            | jenkins | v1.20.0 | Wed, 26 May 2021 22:03:58 UTC | Wed, 26 May 2021 22:03:59 UTC |
	|         | kindnet-20210526215016-510955            |                                          |         |         |                               |                               |
	| start   | -p                                       | flannel-20210526215016-510955            | jenkins | v1.20.0 | Wed, 26 May 2021 22:02:04 UTC | Wed, 26 May 2021 22:04:58 UTC |
	|         | flannel-20210526215016-510955            |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m            |                                          |         |         |                               |                               |
	|         | --cni=flannel --driver=kvm2              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| ssh     | -p                                       | flannel-20210526215016-510955            | jenkins | v1.20.0 | Wed, 26 May 2021 22:05:03 UTC | Wed, 26 May 2021 22:05:03 UTC |
	|         | flannel-20210526215016-510955            |                                          |         |         |                               |                               |
	|         | pgrep -a kubelet                         |                                          |         |         |                               |                               |
	| delete  | -p                                       | flannel-20210526215016-510955            | jenkins | v1.20.0 | Wed, 26 May 2021 22:05:14 UTC | Wed, 26 May 2021 22:05:15 UTC |
	|         | flannel-20210526215016-510955            |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/26 22:05:15
	Running on machine: debian-jenkins-agent-4
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0526 22:05:15.385369  563060 out.go:291] Setting OutFile to fd 1 ...
	I0526 22:05:15.385581  563060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 22:05:15.385594  563060 out.go:304] Setting ErrFile to fd 2...
	I0526 22:05:15.385598  563060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 22:05:15.385723  563060 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 22:05:15.386072  563060 out.go:298] Setting JSON to false
	I0526 22:05:15.427447  563060 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":20878,"bootTime":1622045838,"procs":178,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 22:05:15.427534  563060 start.go:120] virtualization: kvm guest
	I0526 22:05:15.430386  563060 out.go:170] * [old-k8s-version-20210526220515-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 22:05:15.431873  563060 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 22:05:15.433234  563060 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 22:05:15.434623  563060 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 22:05:15.435956  563060 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 22:05:15.436595  563060 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 22:05:15.468977  563060 out.go:170] * Using the kvm2 driver based on user configuration
	I0526 22:05:15.469001  563060 start.go:278] selected driver: kvm2
	I0526 22:05:15.469007  563060 start.go:751] validating driver "kvm2" against <nil>
	I0526 22:05:15.469024  563060 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 22:05:15.469687  563060 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 22:05:15.469838  563060 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 22:05:15.484412  563060 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 22:05:15.484476  563060 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0526 22:05:15.484656  563060 start_flags.go:656] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0526 22:05:15.484684  563060 cni.go:93] Creating CNI manager for ""
	I0526 22:05:15.484693  563060 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0526 22:05:15.484701  563060 start_flags.go:268] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0526 22:05:15.484709  563060 start_flags.go:273] config:
	{Name:old-k8s-version-20210526220515-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210526220515-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 22:05:15.484810  563060 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 22:05:15.486797  563060 out.go:170] * Starting control plane node old-k8s-version-20210526220515-510955 in cluster old-k8s-version-20210526220515-510955
	I0526 22:05:15.486823  563060 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0526 22:05:15.486851  563060 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0526 22:05:15.486872  563060 cache.go:54] Caching tarball of preloaded images
	I0526 22:05:15.486978  563060 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 22:05:15.487001  563060 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on containerd
	I0526 22:05:15.487211  563060 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/config.json ...
	I0526 22:05:15.487252  563060 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/config.json: {Name:mkf6e921d20289b2e2e3fa2be7b9d6d897221a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 22:05:15.487432  563060 cache.go:191] Successfully downloaded all kic artifacts
	I0526 22:05:15.487454  563060 start.go:313] acquiring machines lock for old-k8s-version-20210526220515-510955: {Name:mk9b6c43d31e9eaa4b66476ed1274ba5b188c66b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 22:05:15.487511  563060 start.go:317] acquired machines lock for "old-k8s-version-20210526220515-510955" in 40.362µs
	I0526 22:05:15.487536  563060 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210526220515-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210526220515-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0526 22:05:15.487622  563060 start.go:126] createHost starting for "" (driver="kvm2")
	I0526 22:05:15.276498  562617 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 22:05:15.276595  562617 addons.go:268] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0526 22:05:15.276610  562617 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0526 22:05:15.276629  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHHostname
	I0526 22:05:15.277611  562617 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-59vxv" in "kube-system" namespace to be "Ready" ...
	I0526 22:05:15.289262  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHPort
	I0526 22:05:15.289410  562617 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0526 22:05:15.289549  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHKeyPath
	I0526 22:05:15.289717  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | domain bridge-20210526215016-510955 has defined MAC address 52:54:00:0d:fa:4c in network mk-bridge-20210526215016-510955
	I0526 22:05:15.289778  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:fa:4c", ip: ""} in network mk-bridge-20210526215016-510955: {Iface:virbr3 ExpiryTime:2021-05-26 23:04:14 +0000 UTC Type:0 Mac:52:54:00:0d:fa:4c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:bridge-20210526215016-510955 Clientid:01:52:54:00:0d:fa:4c}
	I0526 22:05:15.289803  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | domain bridge-20210526215016-510955 has defined IP address 192.168.61.192 and MAC address 52:54:00:0d:fa:4c in network mk-bridge-20210526215016-510955
	I0526 22:05:15.289847  562617 main.go:128] libmachine: () Calling .GetVersion
	I0526 22:05:15.289852  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHUsername
	I0526 22:05:15.290192  562617 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0526 22:05:15.290398  562617 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/bridge-20210526215016-510955/id_rsa Username:docker}
	I0526 22:05:15.290934  562617 main.go:128] libmachine: Using API Version  1
	I0526 22:05:15.290962  562617 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 22:05:15.290987  562617 main.go:128] libmachine: () Calling .GetVersion
	I0526 22:05:15.291353  562617 main.go:128] libmachine: () Calling .GetMachineName
	I0526 22:05:15.291531  562617 main.go:128] libmachine: Using API Version  1
	I0526 22:05:15.291552  562617 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 22:05:15.291736  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .DriverName
	I0526 22:05:15.291899  562617 main.go:128] libmachine: () Calling .GetMachineName
	I0526 22:05:15.291917  562617 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 22:05:15.291940  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHHostname
	I0526 22:05:15.292502  562617 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 22:05:15.292548  562617 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 22:05:15.298082  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | domain bridge-20210526215016-510955 has defined MAC address 52:54:00:0d:fa:4c in network mk-bridge-20210526215016-510955
	I0526 22:05:15.298500  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:fa:4c", ip: ""} in network mk-bridge-20210526215016-510955: {Iface:virbr3 ExpiryTime:2021-05-26 23:04:14 +0000 UTC Type:0 Mac:52:54:00:0d:fa:4c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:bridge-20210526215016-510955 Clientid:01:52:54:00:0d:fa:4c}
	I0526 22:05:15.298526  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | domain bridge-20210526215016-510955 has defined IP address 192.168.61.192 and MAC address 52:54:00:0d:fa:4c in network mk-bridge-20210526215016-510955
	I0526 22:05:15.298666  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHPort
	I0526 22:05:15.298820  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHKeyPath
	I0526 22:05:15.298967  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHUsername
	I0526 22:05:15.299094  562617 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/bridge-20210526215016-510955/id_rsa Username:docker}
	I0526 22:05:15.305318  562617 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0526 22:05:15.305699  562617 main.go:128] libmachine: () Calling .GetVersion
	I0526 22:05:15.306210  562617 main.go:128] libmachine: Using API Version  1
	I0526 22:05:15.306234  562617 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 22:05:15.306609  562617 main.go:128] libmachine: () Calling .GetMachineName
	I0526 22:05:15.306801  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetState
	I0526 22:05:15.309708  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .DriverName
	I0526 22:05:15.309943  562617 addons.go:268] installing /etc/kubernetes/addons/storageclass.yaml
	I0526 22:05:15.309963  562617 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0526 22:05:15.309981  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHHostname
	I0526 22:05:15.315475  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | domain bridge-20210526215016-510955 has defined MAC address 52:54:00:0d:fa:4c in network mk-bridge-20210526215016-510955
	I0526 22:05:15.315928  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:fa:4c", ip: ""} in network mk-bridge-20210526215016-510955: {Iface:virbr3 ExpiryTime:2021-05-26 23:04:14 +0000 UTC Type:0 Mac:52:54:00:0d:fa:4c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:bridge-20210526215016-510955 Clientid:01:52:54:00:0d:fa:4c}
	I0526 22:05:15.315954  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | domain bridge-20210526215016-510955 has defined IP address 192.168.61.192 and MAC address 52:54:00:0d:fa:4c in network mk-bridge-20210526215016-510955
	I0526 22:05:15.316185  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHPort
	I0526 22:05:15.316461  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHKeyPath
	I0526 22:05:15.316614  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .GetSSHUsername
	I0526 22:05:15.316737  562617 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/bridge-20210526215016-510955/id_rsa Username:docker}
	I0526 22:05:15.397329  562617 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0526 22:05:15.415708  562617 containerd.go:566] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210526211257-510955". assuming images are not preloaded.
	I0526 22:05:15.415732  562617 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210526211257-510955]
	I0526 22:05:15.415790  562617 image.go:162] retrieving image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 22:05:15.415810  562617 image.go:168] checking repository: index.docker.io/library/minikube-local-cache-test
	I0526 22:05:15.443528  562617 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0526 22:05:15.530997  562617 image.go:175] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0526 22:05:15.531035  562617 image.go:176] short name: minikube-local-cache-test:functional-20210526211257-510955
	I0526 22:05:15.532017  562617 image.go:204] daemon lookup for minikube-local-cache-test:functional-20210526211257-510955: Error response from daemon: reference does not exist
	W0526 22:05:15.578621  562617 image.go:214] authn lookup for minikube-local-cache-test:functional-20210526211257-510955 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 22:05:15.622445  562617 image.go:218] remote lookup for minikube-local-cache-test:functional-20210526211257-510955: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0526 22:05:15.622497  562617 image.go:95] error retrieve Image minikube-local-cache-test:functional-20210526211257-510955 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210526211257-510955: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0526 22:05:15.622528  562617 cache_images.go:106] "minikube-local-cache-test:functional-20210526211257-510955" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210526211257-510955
	I0526 22:05:15.622564  562617 cri.go:205] Removing image: minikube-local-cache-test:functional-20210526211257-510955
	I0526 22:05:15.622607  562617 ssh_runner.go:149] Run: which crictl
	I0526 22:05:15.830682  562617 main.go:128] libmachine: Making call to close driver server
	I0526 22:05:15.830710  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .Close
	I0526 22:05:15.831086  562617 main.go:128] libmachine: Successfully made call to close driver server
	I0526 22:05:15.831105  562617 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 22:05:15.831114  562617 main.go:128] libmachine: Making call to close driver server
	I0526 22:05:15.831124  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .Close
	I0526 22:05:15.831128  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | Closing plugin on server side
	I0526 22:05:15.831417  562617 main.go:128] libmachine: Successfully made call to close driver server
	I0526 22:05:15.831431  562617 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 22:05:15.859954  562617 main.go:128] libmachine: Making call to close driver server
	I0526 22:05:15.859986  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .Close
	I0526 22:05:15.860005  562617 ssh_runner.go:149] Run: sudo /bin/crictl rmi minikube-local-cache-test:functional-20210526211257-510955
	I0526 22:05:15.860235  562617 main.go:128] libmachine: Successfully made call to close driver server
	I0526 22:05:15.860252  562617 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 22:05:15.860260  562617 main.go:128] libmachine: Making call to close driver server
	I0526 22:05:15.860269  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .Close
	I0526 22:05:15.860506  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | Closing plugin on server side
	I0526 22:05:15.860569  562617 main.go:128] libmachine: Successfully made call to close driver server
	I0526 22:05:15.860587  562617 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 22:05:15.860604  562617 main.go:128] libmachine: Making call to close driver server
	I0526 22:05:15.860614  562617 main.go:128] libmachine: (bridge-20210526215016-510955) Calling .Close
	I0526 22:05:15.862040  562617 main.go:128] libmachine: Successfully made call to close driver server
	I0526 22:05:15.862064  562617 main.go:128] libmachine: Making call to close connection to plugin binary
	I0526 22:05:15.862048  562617 main.go:128] libmachine: (bridge-20210526215016-510955) DBG | Closing plugin on server side
	I0526 22:05:15.805943  558359 kubeadm.go:392] StartCluster complete in 12m29.229372005s
	I0526 22:05:15.805997  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 22:05:15.806043  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 22:05:15.832878  558359 cri.go:76] found id: ""
	I0526 22:05:15.832899  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.832907  558359 logs.go:272] No container was found matching "kube-apiserver"
	I0526 22:05:15.832915  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 22:05:15.832977  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 22:05:15.859276  558359 cri.go:76] found id: ""
	I0526 22:05:15.859300  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.859307  558359 logs.go:272] No container was found matching "etcd"
	I0526 22:05:15.859318  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 22:05:15.859388  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 22:05:15.885218  558359 cri.go:76] found id: ""
	I0526 22:05:15.885240  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.885248  558359 logs.go:272] No container was found matching "coredns"
	I0526 22:05:15.885257  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 22:05:15.885327  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 22:05:15.909003  558359 cri.go:76] found id: ""
	I0526 22:05:15.909024  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.909031  558359 logs.go:272] No container was found matching "kube-scheduler"
	I0526 22:05:15.909038  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 22:05:15.909087  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 22:05:15.929574  558359 cri.go:76] found id: ""
	I0526 22:05:15.929596  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.929603  558359 logs.go:272] No container was found matching "kube-proxy"
	I0526 22:05:15.929612  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 22:05:15.929670  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 22:05:15.955184  558359 cri.go:76] found id: ""
	I0526 22:05:15.955207  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.955213  558359 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 22:05:15.955219  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 22:05:15.955272  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 22:05:15.975887  558359 cri.go:76] found id: ""
	I0526 22:05:15.975920  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.975930  558359 logs.go:272] No container was found matching "storage-provisioner"
	I0526 22:05:15.975940  558359 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 22:05:15.976005  558359 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 22:05:15.996208  558359 cri.go:76] found id: ""
	I0526 22:05:15.996229  558359 logs.go:270] 0 containers: []
	W0526 22:05:15.996237  558359 logs.go:272] No container was found matching "kube-controller-manager"
	I0526 22:05:15.996251  558359 logs.go:123] Gathering logs for kubelet ...
	I0526 22:05:15.996270  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 22:05:16.076587  558359 logs.go:123] Gathering logs for dmesg ...
	I0526 22:05:16.076623  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 22:05:16.090634  558359 logs.go:123] Gathering logs for describe nodes ...
	I0526 22:05:16.090663  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 22:05:16.188309  558359 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 22:05:16.188340  558359 logs.go:123] Gathering logs for containerd ...
	I0526 22:05:16.188359  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 22:05:16.278354  558359 logs.go:123] Gathering logs for container status ...
	I0526 22:05:16.278389  558359 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0526 22:05:16.309975  558359 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0526 22:05:16.310012  558359 out.go:235] * 
	W0526 22:05:16.310220  558359 out.go:235] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0526 22:05:16.310248  558359 out.go:235] * 
	W0526 22:05:16.312189  558359 out.go:235] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	W0526 22:05:16.312204  558359 out.go:235] │                                                                                                                                                             │
	W0526 22:05:16.312213  558359 out.go:235] │    * If the above advice does not help, please let us know:                                                                                                 │
	W0526 22:05:16.312219  558359 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	W0526 22:05:16.312225  558359 out.go:235] │                                                                                                                                                             │
	W0526 22:05:16.312238  558359 out.go:235] │    * Please attach the following file to the GitHub issue:                                                                                                  │
	W0526 22:05:16.312248  558359 out.go:235] │    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	W0526 22:05:16.312256  558359 out.go:235] │                                                                                                                                                             │
	W0526 22:05:16.312267  558359 out.go:235] ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	W0526 22:05:16.312277  558359 out.go:235] 
	I0526 22:05:16.315625  558359 out.go:170] 
	W0526 22:05:16.315854  558359 out.go:235] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0526 22:01:14.291084    9335 validation.go:28] Cannot validate kubelet config - no validator is available
	W0526 22:01:14.291241    9335 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0526 22:01:15.797516    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0526 22:01:15.798659    9335 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0526 22:05:16.315954  558359 out.go:235] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0526 22:05:16.316008  558359 out.go:235] * Related issue: https://github.com/kubernetes/minikube/issues/4172
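	A minimal sketch of the remediation suggested above, assuming the failing profile is running-upgrade-20210526215018-510955 (per the containerd log below) and reusing this job's kvm2/containerd settings; any other flags would mirror the profile's original start invocation, with only --extra-config appended as the suggestion describes:
	  minikube start -p running-upgrade-20210526215018-510955 --driver=kvm2 --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd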
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2021-05-26 21:50:47 UTC, end at Wed 2021-05-26 22:05:17 UTC. --
	May 26 22:04:37 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:37.702098663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20210526215018-510955,Uid:603b914543a305bf066dc8de01ce2232,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:04:38 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:38.673710610Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20210526215018-510955,Uid:6d51c1650fdc83c9c172441c88de0c8b,Namespace:kube-system,Attempt:0,}"
	May 26 22:04:38 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:38.674583886Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,}"
	May 26 22:04:38 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:38.711755490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20210526215018-510955,Uid:6d51c1650fdc83c9c172441c88de0c8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:04:38 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:38.724924520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:04:39 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:39.672533080Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20210526215018-510955,Uid:52d3bd95fc93c917a5a0edc6e16385fc,Namespace:kube-system,Attempt:0,}"
	May 26 22:04:39 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:39.700928179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20210526215018-510955,Uid:52d3bd95fc93c917a5a0edc6e16385fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:04:51 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:51.673387188Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20210526215018-510955,Uid:52d3bd95fc93c917a5a0edc6e16385fc,Namespace:kube-system,Attempt:0,}"
	May 26 22:04:51 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:51.699646530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20210526215018-510955,Uid:52d3bd95fc93c917a5a0edc6e16385fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:04:52 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:52.673179483Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20210526215018-510955,Uid:6d51c1650fdc83c9c172441c88de0c8b,Namespace:kube-system,Attempt:0,}"
	May 26 22:04:52 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:52.673987983Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,}"
	May 26 22:04:52 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:52.674701408Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20210526215018-510955,Uid:603b914543a305bf066dc8de01ce2232,Namespace:kube-system,Attempt:0,}"
	May 26 22:04:52 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:52.731326413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20210526215018-510955,Uid:6d51c1650fdc83c9c172441c88de0c8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:04:52 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:52.739197998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:04:52 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:04:52.741878488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20210526215018-510955,Uid:603b914543a305bf066dc8de01ce2232,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.770775004Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20210526215018-510955,Uid:6d51c1650fdc83c9c172441c88de0c8b,Namespace:kube-system,Attempt:0,}"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.775058506Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20210526215018-510955,Uid:603b914543a305bf066dc8de01ce2232,Namespace:kube-system,Attempt:0,}"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.775375682Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20210526215018-510955,Uid:52d3bd95fc93c917a5a0edc6e16385fc,Namespace:kube-system,Attempt:0,}"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.778220193Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,}"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.829769291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20210526215018-510955,Uid:6d51c1650fdc83c9c172441c88de0c8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.864503471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20210526215018-510955,Uid:603b914543a305bf066dc8de01ce2232,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.875664433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:05:03 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:03.879525345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20210526215018-510955,Uid:52d3bd95fc93c917a5a0edc6e16385fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	May 26 22:05:16 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:16.368488038Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,}"
	May 26 22:05:16 running-upgrade-20210526215018-510955 containerd[3723]: time="2021-05-26T22:05:16.406577980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20210526215018-510955,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.083688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[ +18.592694] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.709976] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
	[  +0.011221] systemd-fstab-generator[1143]: Ignoring "noauto" for root device
	[  +0.001475] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
	[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
	[  +1.843676] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.047826] vboxguest: loading out-of-tree module taints kernel.
	[  +0.009624] vboxguest: PCI device not found, probably running on physical hardware.
	[  +8.258124] systemd-fstab-generator[1984]: Ignoring "noauto" for root device
	[May26 21:51] systemd-fstab-generator[2428]: Ignoring "noauto" for root device
	[  +8.386378] systemd-fstab-generator[2722]: Ignoring "noauto" for root device
	[May26 21:52] systemd-fstab-generator[3465]: Ignoring "noauto" for root device
	[  +4.628014] systemd-fstab-generator[3714]: Ignoring "noauto" for root device
	[  +8.864724] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.136599] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.903693] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.344956] NFSD: Unable to end grace period: -110
	[  +7.695385] systemd-fstab-generator[5220]: Ignoring "noauto" for root device
	[May26 21:57] systemd-fstab-generator[9059]: Ignoring "noauto" for root device
	[May26 22:01] systemd-fstab-generator[9418]: Ignoring "noauto" for root device
	
	* 
	* ==> kernel <==
	*  22:05:17 up 14 min,  0 users,  load average: 0.13, 0.36, 0.40
	Linux running-upgrade-20210526215018-510955 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2019.02.7"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-05-26 21:50:47 UTC, end at Wed 2021-05-26 22:05:17 UTC. --
	May 26 22:05:15 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:15.977099    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.062963    9640 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)running-upgrade-20210526215018-510955&limit=500&resourceVersion=0: dial tcp 192.168.50.63:8443: connect: connection refused
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.077471    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.178021    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.264772    9640 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.50.63:8443: connect: connection refused
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.278643    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.305398    9640 event.go:272] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events: dial tcp 192.168.50.63:8443: connect: connection refused' (may retry after sleeping)
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: I0526 22:05:16.365007    9640 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.379397    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.407184    9640 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.407250    9640 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-scheduler-running-upgrade-20210526215018-510955_kube-system(bb577061a17ad23cfbbf52e9419bf32a)" failed: rpc error: code = Unknown desc = failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.407271    9640 kuberuntime_manager.go:729] createPodSandbox for pod "kube-scheduler-running-upgrade-20210526215018-510955_kube-system(bb577061a17ad23cfbbf52e9419bf32a)" failed: rpc error: code = Unknown desc = failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.407350    9640 pod_workers.go:191] Error syncing pod bb577061a17ad23cfbbf52e9419bf32a ("kube-scheduler-running-upgrade-20210526215018-510955_kube-system(bb577061a17ad23cfbbf52e9419bf32a)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-running-upgrade-20210526215018-510955_kube-system(bb577061a17ad23cfbbf52e9419bf32a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-running-upgrade-20210526215018-510955_kube-system(bb577061a17ad23cfbbf52e9419bf32a)\" failed: rpc error: code = Unknown desc = failed to create containerd task: runtime \"io.containerd.runc.v2\" binary not installed \"containerd-shim-runc-v2\": file does not exist: unknown"
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.464222    9640 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.50.63:8443: connect: connection refused
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.480713    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.581104    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.682158    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.782748    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.883555    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:16 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:16.984699    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:17 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:17.060177    9640 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)running-upgrade-20210526215018-510955&limit=500&resourceVersion=0: dial tcp 192.168.50.63:8443: connect: connection refused
	May 26 22:05:17 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:17.085180    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:17 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:17.185747    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	May 26 22:05:17 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:17.260252    9640 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.50.63:8443: connect: connection refused
	May 26 22:05:17 running-upgrade-20210526215018-510955 kubelet[9640]: E0526 22:05:17.286422    9640 kubelet.go:2263] node "running-upgrade-20210526215018-510955" not found
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 22:05:17.273274  563253 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:245: failed logs error: exit status 110
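The kubeadm hint earlier in this log suggests listing containers with the docker CLI, but this profile runs containerd, and the empty '==> container status <==' table together with the repeated 'failed to create containerd task: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2"' errors shows that no sandbox was ever created. A minimal sketch of how one might confirm that from the host while the profile is still up (it assumes crictl is available in the guest and that containerd listens on its default socket; neither is guaranteed for this old guest image):

	# check whether the v2 shim containerd is configured to use actually exists in the guest
	minikube -p running-upgrade-20210526215018-510955 ssh -- 'which containerd-shim-runc-v2 || ls /usr/bin/containerd-shim*'
	# list Kubernetes containers via CRI instead of the docker CLI (expected to be empty here)
	minikube -p running-upgrade-20210526215018-510955 ssh -- 'sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a'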
helpers_test.go:171: Cleaning up "running-upgrade-20210526215018-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210526215018-510955
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210526215018-510955: (1.165870542s)
--- FAIL: TestRunningBinaryUpgrade (899.60s)
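The suggestion printed at the end of the failed start ('journalctl -xeu kubelet' and --extra-config=kubelet.cgroup-driver=systemd) is the usual follow-up, although the containerd log above points at the missing containerd-shim-runc-v2 binary rather than at the cgroup driver. As a sketch only, reusing the profile name and flags that appear in this run:

	# inspect kubelet logs inside the guest, as the failure output suggests
	minikube -p running-upgrade-20210526215018-510955 ssh -- 'sudo journalctl -xeu kubelet --no-pager | tail -n 50'
	# retry the upgraded start with the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p running-upgrade-20210526215018-510955 --driver=kvm2 --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd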

                                                
                                    
x
+
TestStoppedBinaryUpgrade (853.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:189: (dbg) Run:  /tmp/minikube-v1.0.0.084713600.exe start -p stopped-upgrade-20210526214750-510955 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0526 21:49:19.839659  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:189: (dbg) Done: /tmp/minikube-v1.0.0.084713600.exe start -p stopped-upgrade-20210526214750-510955 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (3m33.533300497s)
version_upgrade_test.go:198: (dbg) Run:  /tmp/minikube-v1.0.0.084713600.exe -p stopped-upgrade-20210526214750-510955 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:198: (dbg) Done: /tmp/minikube-v1.0.0.084713600.exe -p stopped-upgrade-20210526214750-510955 stop: (1m32.738409811s)
version_upgrade_test.go:204: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20210526214750-510955 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-20210526214750-510955 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109 (9m5.246578695s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210526214750-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	* Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-20210526214750-510955 in cluster stopped-upgrade-20210526214750-510955
	* Restarting existing  VM for "stopped-upgrade-20210526214750-510955" ...
	* Preparing Kubernetes v1.14.0 on containerd 1.2.0 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0526 21:52:57.623163  558718 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:52:57.623306  558718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:52:57.623318  558718 out.go:304] Setting ErrFile to fd 2...
	I0526 21:52:57.623323  558718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:52:57.623480  558718 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:52:57.623786  558718 out.go:298] Setting JSON to false
	I0526 21:52:57.659498  558718 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":20140,"bootTime":1622045838,"procs":171,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:52:57.659604  558718 start.go:120] virtualization: kvm guest
	I0526 21:52:57.662287  558718 out.go:170] * [stopped-upgrade-20210526214750-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:52:57.664136  558718 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:52:57.665764  558718 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:52:57.667330  558718 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:52:57.668831  558718 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:52:57.669153  558718 start_flags.go:478] config upgrade: Name=stopped-upgrade-20210526214750-510955
	I0526 21:52:57.669167  558718 start_flags.go:485] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c
	I0526 21:52:57.669171  558718 start_flags.go:489] Existing config file was missing cpu. (could be an old minikube config), will use the default value
	I0526 21:52:57.669178  558718 start_flags.go:494] Existing config file was missing memory. (could be an old minikube config), will use the default value
	I0526 21:52:57.669659  558718 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/stopped-upgrade-20210526214750-510955/config.json ...
	I0526 21:52:57.670191  558718 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:52:57.670287  558718 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:52:57.681406  558718 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0526 21:52:57.681916  558718 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:52:57.682546  558718 main.go:128] libmachine: Using API Version  1
	I0526 21:52:57.682566  558718 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:52:57.682926  558718 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:52:57.683128  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:52:57.685194  558718 out.go:170] * Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	I0526 21:52:57.685239  558718 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:52:57.685527  558718 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:52:57.685570  558718 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:52:57.695599  558718 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0526 21:52:57.695980  558718 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:52:57.696423  558718 main.go:128] libmachine: Using API Version  1
	I0526 21:52:57.696459  558718 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:52:57.696757  558718 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:52:57.696974  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:52:57.729161  558718 out.go:170] * Using the kvm2 driver based on existing profile
	I0526 21:52:57.729183  558718 start.go:278] selected driver: kvm2
	I0526 21:52:57.729190  558718 start.go:751] validating driver "kvm2" against &{Name:stopped-upgrade-20210526214750-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:0 VMDriver: Driver: HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR: HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork: KVMQemuURI: KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot: UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:false HostOnlyNicType: NatNicType: SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.39 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:52:57.729306  558718 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:52:57.729377  558718 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:57.729503  558718 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 21:52:57.741186  558718 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 21:52:57.741279  558718 cni.go:93] Creating CNI manager for ""
	I0526 21:52:57.741294  558718 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0526 21:52:57.741300  558718 start_flags.go:273] config:
	{Name:stopped-upgrade-20210526214750-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:0 VMDriver: Driver: HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR: HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork: KVMQemuURI: KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot: UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:false HostOnlyNicType: NatNicType: SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.39 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:52:57.741391  558718 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:52:57.743493  558718 out.go:170] * Starting control plane node stopped-upgrade-20210526214750-510955 in cluster stopped-upgrade-20210526214750-510955
	I0526 21:52:57.743514  558718 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0526 21:52:57.743540  558718 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0526 21:52:57.743555  558718 cache.go:54] Caching tarball of preloaded images
	I0526 21:52:57.743657  558718 preload.go:143] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0526 21:52:57.743680  558718 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on containerd
	I0526 21:52:57.743756  558718 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/stopped-upgrade-20210526214750-510955/config.json ...
	I0526 21:52:57.743914  558718 cache.go:191] Successfully downloaded all kic artifacts
	I0526 21:52:57.743946  558718 start.go:313] acquiring machines lock for stopped-upgrade-20210526214750-510955: {Name:mk9b04ec1915adb419ceda73878e578383814cb3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0526 21:52:57.744043  558718 start.go:317] acquired machines lock for "stopped-upgrade-20210526214750-510955" in 80.273µs
	I0526 21:52:57.744063  558718 start.go:93] Skipping create...Using existing machine configuration
	I0526 21:52:57.744071  558718 fix.go:55] fixHost starting: minikube
	I0526 21:52:57.744380  558718 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:52:57.744430  558718 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:52:57.754059  558718 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0526 21:52:57.754483  558718 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:52:57.754958  558718 main.go:128] libmachine: Using API Version  1
	I0526 21:52:57.754980  558718 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:52:57.755268  558718 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:52:57.755462  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:52:57.755612  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetState
	I0526 21:52:57.759362  558718 fix.go:108] recreateIfNeeded on stopped-upgrade-20210526214750-510955: state=Stopped err=<nil>
	I0526 21:52:57.759395  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	W0526 21:52:57.759553  558718 fix.go:134] unexpected machine state, will restart: <nil>
	I0526 21:52:57.761286  558718 out.go:170] * Restarting existing  VM for "stopped-upgrade-20210526214750-510955" ...
	I0526 21:52:57.761315  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .Start
	I0526 21:52:57.761471  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Ensuring networks are active...
	I0526 21:52:57.763562  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Ensuring network default is active
	I0526 21:52:57.763897  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Ensuring network minikube-net is active
	I0526 21:52:57.764372  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Getting domain xml...
	I0526 21:52:57.766592  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Creating domain...
	I0526 21:52:58.109074  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Waiting to get IP...
	I0526 21:52:58.110130  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:52:58.110615  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Found IP for machine: 192.168.50.39
	I0526 21:52:58.110643  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Reserving static IP address...
	I0526 21:52:58.110673  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has current primary IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:52:58.111193  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Reserved static IP address: 192.168.50.39
	I0526 21:52:58.111216  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Waiting for SSH to be available...
	I0526 21:52:58.111247  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "stopped-upgrade-20210526214750-510955", mac: "52:54:00:77:7b:d0", ip: "192.168.50.39"} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:48:19 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:52:58.111266  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-20210526214750-510955", mac: "52:54:00:77:7b:d0", ip: "192.168.50.39"}
	I0526 21:52:58.111285  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | Getting to WaitForSSH function...
	I0526 21:52:58.116924  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:52:58.117338  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:48:19 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:52:58.117385  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:52:58.117435  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | Using SSH client type: external
	I0526 21:52:58.117472  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/stopped-upgrade-20210526214750-510955/id_rsa (-rw-------)
	I0526 21:52:58.117511  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/stopped-upgrade-20210526214750-510955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0526 21:52:58.117532  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | About to run SSH command:
	I0526 21:52:58.117544  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | exit 0
	I0526 21:53:20.241237  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | SSH cmd err, output: <nil>: 
	I0526 21:53:20.241730  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetConfigRaw
	I0526 21:53:20.242597  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetIP
	I0526 21:53:20.249072  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.249496  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.249534  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.249856  558718 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/stopped-upgrade-20210526214750-510955/config.json ...
	I0526 21:53:20.250104  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:20.250340  558718 machine.go:88] provisioning docker machine ...
	I0526 21:53:20.250372  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:20.250563  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetMachineName
	I0526 21:53:20.250728  558718 buildroot.go:166] provisioning hostname "stopped-upgrade-20210526214750-510955"
	I0526 21:53:20.250757  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetMachineName
	I0526 21:53:20.250922  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:20.256165  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.256492  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.256522  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.256749  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHPort
	I0526 21:53:20.256926  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.257092  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.257243  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:53:20.257443  558718 main.go:128] libmachine: Using SSH client type: native
	I0526 21:53:20.257653  558718 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0526 21:53:20.257674  558718 main.go:128] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210526214750-510955 && echo "stopped-upgrade-20210526214750-510955" | sudo tee /etc/hostname
	I0526 21:53:20.367109  558718 main.go:128] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210526214750-510955
	
	I0526 21:53:20.367143  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:20.372969  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.373265  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.373294  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.373459  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHPort
	I0526 21:53:20.373635  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.373819  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.373956  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:53:20.374085  558718 main.go:128] libmachine: Using SSH client type: native
	I0526 21:53:20.374233  558718 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0526 21:53:20.374251  558718 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210526214750-510955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210526214750-510955/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210526214750-510955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0526 21:53:20.458305  558718 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0526 21:53:20.458335  558718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube}
	I0526 21:53:20.458377  558718 buildroot.go:174] setting up certificates
	I0526 21:53:20.458388  558718 provision.go:83] configureAuth start
	I0526 21:53:20.458401  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetMachineName
	I0526 21:53:20.458659  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetIP
	I0526 21:53:20.464396  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.464736  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.464765  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.464923  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:20.469612  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.469922  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.469955  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.470036  558718 provision.go:137] copyHostCerts
	I0526 21:53:20.470101  558718 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem, removing ...
	I0526 21:53:20.470115  558718 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem
	I0526 21:53:20.470167  558718 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cert.pem (1123 bytes)
	I0526 21:53:20.470271  558718 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem, removing ...
	I0526 21:53:20.470282  558718 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem
	I0526 21:53:20.470306  558718 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/key.pem (1679 bytes)
	I0526 21:53:20.470371  558718 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem, removing ...
	I0526 21:53:20.470379  558718 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem
	I0526 21:53:20.470394  558718 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.pem (1078 bytes)
	I0526 21:53:20.470448  558718 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210526214750-510955 san=[192.168.50.39 192.168.50.39 localhost 127.0.0.1 minikube stopped-upgrade-20210526214750-510955]
	I0526 21:53:20.745961  558718 provision.go:171] copyRemoteCerts
	I0526 21:53:20.746033  558718 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0526 21:53:20.746064  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:20.751565  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.751908  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.751948  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.752066  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHPort
	I0526 21:53:20.752286  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.752497  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:53:20.752683  558718 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/stopped-upgrade-20210526214750-510955/id_rsa Username:docker}
	I0526 21:53:20.819533  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0526 21:53:20.834354  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0526 21:53:20.852389  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0526 21:53:20.868026  558718 provision.go:86] duration metric: configureAuth took 409.624926ms
	I0526 21:53:20.868053  558718 buildroot.go:189] setting minikube options for container-runtime
	I0526 21:53:20.868216  558718 machine.go:91] provisioned docker machine in 617.857421ms
	I0526 21:53:20.868232  558718 start.go:267] post-start starting for "stopped-upgrade-20210526214750-510955" (driver="kvm2")
	I0526 21:53:20.868242  558718 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0526 21:53:20.868275  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:20.868573  558718 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0526 21:53:20.868615  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:20.874156  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.874521  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.874565  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.874641  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHPort
	I0526 21:53:20.874826  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.874980  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:53:20.875108  558718 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/stopped-upgrade-20210526214750-510955/id_rsa Username:docker}
	I0526 21:53:20.943320  558718 ssh_runner.go:149] Run: cat /etc/os-release
	I0526 21:53:20.947501  558718 info.go:137] Remote host: Buildroot 2018.05
	I0526 21:53:20.947523  558718 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/addons for local assets ...
	I0526 21:53:20.947601  558718 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files for local assets ...
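
Note: the filesync step above only scans ~/.minikube/addons and ~/.minikube/files for user-supplied assets to copy into the guest. A minimal sketch of that kind of directory scan (layout assumed from the log; not minikube's actual filesync code):

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// listLocalAssets walks root and returns every regular file found,
// mirroring the "Scanning ... for local assets" step in the log.
func listLocalAssets(root string) ([]string, error) {
	var assets []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.Type().IsRegular() {
			assets = append(assets, path)
		}
		return nil
	})
	return assets, err
}

func main() {
	for _, root := range []string{os.ExpandEnv("$HOME/.minikube/addons"), os.ExpandEnv("$HOME/.minikube/files")} {
		files, err := listLocalAssets(root)
		if err != nil {
			fmt.Println("scan skipped:", err)
			continue
		}
		fmt.Printf("%s: %d local assets\n", root, len(files))
	}
}
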
	I0526 21:53:20.947745  558718 start.go:270] post-start completed in 79.50254ms
	I0526 21:53:20.947774  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:20.948000  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:20.953646  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.954029  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:20.954052  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:20.954286  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHPort
	I0526 21:53:20.954485  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.954618  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:20.954740  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:53:20.954881  558718 main.go:128] libmachine: Using SSH client type: native
	I0526 21:53:20.955019  558718 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802c00] 0x802bc0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0526 21:53:20.955030  558718 main.go:128] libmachine: About to run SSH command:
	date +%s.%N
	I0526 21:53:21.041029  558718 main.go:128] libmachine: SSH cmd err, output: <nil>: 1622066000.937222705
	
	I0526 21:53:21.041052  558718 fix.go:212] guest clock: 1622066000.937222705
	I0526 21:53:21.041062  558718 fix.go:225] Guest: 2021-05-26 21:53:20.937222705 +0000 UTC Remote: 2021-05-26 21:53:20.94797899 +0000 UTC m=+23.384380672 (delta=-10.756285ms)
	I0526 21:53:21.041121  558718 fix.go:196] guest clock delta is within tolerance: -10.756285ms
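
Note: fix.go reads the guest clock over SSH with `date +%s.%N` and compares it against the host time, resyncing only when the delta exceeds a tolerance. A rough sketch of that comparison using the exact values logged above (the 2s tolerance is an assumption; the log does not state the threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1622066000.937222705")
// into a time.Time. It assumes the full 9-digit nanosecond field from %N.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold, not taken from the log
	guest, err := parseGuestClock("1622066000.937222705") // guest value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Date(2021, 5, 26, 21, 53, 20, 947978990, time.UTC) // "Remote" time from the log
	delta := guest.Sub(host)
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints ≈ -10.756285ms
	} else {
		fmt.Println("delta too large, would resync the guest clock")
	}
}
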
	I0526 21:53:21.041129  558718 fix.go:57] fixHost completed within 23.297059758s
	I0526 21:53:21.041136  558718 start.go:80] releasing machines lock for "stopped-upgrade-20210526214750-510955", held for 23.297081147s
	I0526 21:53:21.041183  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:21.041478  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetIP
	I0526 21:53:21.047125  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:21.047456  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:21.047488  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:21.047623  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:21.047809  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:21.048360  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .DriverName
	I0526 21:53:21.048599  558718 ssh_runner.go:149] Run: systemctl --version
	I0526 21:53:21.048628  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:21.048656  558718 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0526 21:53:21.048713  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHHostname
	I0526 21:53:21.055062  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:21.055092  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:21.055479  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:21.055515  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:21.055553  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:21.055580  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:21.055644  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHPort
	I0526 21:53:21.055734  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHPort
	I0526 21:53:21.055836  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:21.055879  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHKeyPath
	I0526 21:53:21.056022  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:53:21.056035  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetSSHUsername
	I0526 21:53:21.056199  558718 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/stopped-upgrade-20210526214750-510955/id_rsa Username:docker}
	I0526 21:53:21.056201  558718 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/machines/stopped-upgrade-20210526214750-510955/id_rsa Username:docker}
	I0526 21:53:21.142596  558718 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0526 21:53:21.142655  558718 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0526 21:53:21.142726  558718 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:53:31.160518  558718 ssh_runner.go:189] Completed: sudo crictl images --output json: (10.017762316s)
	I0526 21:53:32.212527  558718 ssh_runner.go:149] Run: which lz4
	W0526 21:53:32.217786  558718 out.go:235] * Existing disk is missing new features (lz4). To upgrade, run 'minikube delete'
	* Existing disk is missing new features (lz4). To upgrade, run 'minikube delete'
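
Note: because `which lz4` failed on this older guest image, the preloaded tarball cannot be extracted, and the run falls back to loading individual cached images (the cache.go lines below). A hedged sketch of that decision, run locally for brevity (minikube actually probes the guest over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe for lz4, as in the "which lz4" step above.
	if _, err := exec.Command("which", "lz4").Output(); err != nil {
		fmt.Println("* Existing disk is missing new features (lz4). Falling back to per-image cache load.")
		// ...load each image tarball from ~/.minikube/cache/images instead of the preload...
		return
	}
	fmt.Println("lz4 present: extracting the preloaded tarball")
}
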
	I0526 21:53:32.217891  558718 cache.go:108] acquiring lock: {Name:mk1f8d1596dae0678a4382cc12c2651bcd889747 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.217900  558718 cache.go:108] acquiring lock: {Name:mke44dfbe61882d7bec256025379350161525440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.217938  558718 cache.go:108] acquiring lock: {Name:mk2f8ceca8f30e3cca1664a31ab426848054fea8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218024  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0526 21:53:32.218049  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0526 21:53:32.218052  558718 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 167.752µs
	I0526 21:53:32.218062  558718 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 130.147µs
	I0526 21:53:32.218069  558718 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0526 21:53:32.218072  558718 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0526 21:53:32.218074  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0 exists
	I0526 21:53:32.218108  558718 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.14.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0" took 215.991µs
	I0526 21:53:32.218128  558718 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.14.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0 succeeded
	I0526 21:53:32.218101  558718 cache.go:108] acquiring lock: {Name:mk9b4f9956918ff45c7d0bc5a2e190e67e357cd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218125  558718 cache.go:108] acquiring lock: {Name:mkc05781c49d9abf0f4755877d92a7aa68074627 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218130  558718 cache.go:108] acquiring lock: {Name:mk8ca30a0d08f08ce2afcc7fddf795e6197a93f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218196  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10 exists
	I0526 21:53:32.218216  558718 cache.go:97] cache image "k8s.gcr.io/etcd:3.3.10" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10" took 123.666µs
	I0526 21:53:32.218236  558718 cache.go:81] save to tar file k8s.gcr.io/etcd:3.3.10 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10 succeeded
	I0526 21:53:32.218248  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0 exists
	I0526 21:53:32.218249  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1 exists
	I0526 21:53:32.218265  558718 cache.go:97] cache image "k8s.gcr.io/coredns:1.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1" took 141.338µs
	I0526 21:53:32.218277  558718 cache.go:81] save to tar file k8s.gcr.io/coredns:1.3.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1 succeeded
	I0526 21:53:32.218264  558718 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.14.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0" took 135.447µs
	I0526 21:53:32.218285  558718 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.14.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0 succeeded
	I0526 21:53:32.218284  558718 cache.go:108] acquiring lock: {Name:mkdb4415bab2f9b78a174a96dc8187fdae653e8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218302  558718 cache.go:108] acquiring lock: {Name:mk22116c0ad376cadb0916c6d017e3f4802f859e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218343  558718 cache.go:108] acquiring lock: {Name:mk5b8c18f6c4b09710b17bc3abbf692f9764a7b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218379  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0 exists
	I0526 21:53:32.218391  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0526 21:53:32.218392  558718 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.14.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0" took 111.369µs
	I0526 21:53:32.218408  558718 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.14.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0 succeeded
	I0526 21:53:32.218355  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0526 21:53:32.218087  558718 cache.go:108] acquiring lock: {Name:mk4792d6a76fb1afa13932460a130feef7aa9420 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 21:53:32.218420  558718 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.631µs
	I0526 21:53:32.218432  558718 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0526 21:53:32.218403  558718 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 62.347µs
	I0526 21:53:32.218441  558718 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0526 21:53:32.218463  558718 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0 exists
	I0526 21:53:32.218519  558718 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.14.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0" took 427.771µs
	I0526 21:53:32.218550  558718 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.14.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0 succeeded
	I0526 21:53:32.218560  558718 cache.go:88] Successfully saved all images to host disk.
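
Note: each "cache image ... exists ... took NNNµs" pair above is just a timed existence check on the per-image tarball under ~/.minikube/cache/images. A small sketch of that check (cache layout inferred from the paths in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cachePath maps an image ref like "k8s.gcr.io/pause:3.1" to its cache tarball,
// following the layout visible in the log (the tag separator ':' becomes '_').
func cachePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images")
	for _, img := range []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/etcd:3.3.10"} {
		start := time.Now()
		if _, err := os.Stat(cachePath(cacheDir, img)); err == nil {
			fmt.Printf("cache image %q exists (checked in %s)\n", img, time.Since(start))
		} else {
			fmt.Printf("cache image %q missing, would download and save it\n", img)
		}
	}
}
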
	I0526 21:53:32.218622  558718 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0526 21:53:32.232480  558718 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0526 21:53:32.244247  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0526 21:53:32.256939  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjEiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM
4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lLm9wdGlvbnNdCiAgICAgICAgICBOb1Bpdm90Um9vdCA9IHRydWUKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDI
KICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
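
Note: the containerd config is shipped by base64-encoding the rendered config.toml locally and decoding it on the guest via `base64 -d | sudo tee`, exactly the pattern of the command above. A minimal sketch of building such a command (the config body here is a short stand-in, not the real blob embedded above):

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Stand-in config body; the real one is the base64 payload in the log above.
	config := "root = \"/var/lib/containerd\"\nstate = \"/run/containerd\"\n"
	b64 := base64.StdEncoding.EncodeToString([]byte(config))
	// Same shell pattern as the ssh_runner command in the log:
	cmd := fmt.Sprintf(`sudo mkdir -p /etc/containerd && printf %%s "%s" | base64 -d | sudo tee /etc/containerd/config.toml`, b64)
	fmt.Println(cmd)
}
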
	I0526 21:53:32.269200  558718 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0526 21:53:32.276624  558718 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0526 21:53:32.276681  558718 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0526 21:53:32.297446  558718 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0526 21:53:32.305085  558718 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0526 21:53:32.389076  558718 ssh_runner.go:149] Run: sudo systemctl restart containerd
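
Note: the steps above are a standard CNI prerequisite: probe net.bridge.bridge-nf-call-iptables, fall back to loading br_netfilter when the proc entry is missing, then enable IPv4 forwarding before restarting containerd. A sketch of that fallback using the same commands (requires root; illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
	}
	return nil
}

func main() {
	// Mirrors the log: sysctl check fails -> modprobe br_netfilter, then enable forwarding.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("netfilter not verifiable, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed (the module may be built into the kernel):", err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
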
	I0526 21:53:32.423645  558718 start.go:376] Will wait 60s for socket path /run/containerd/containerd.sock
	I0526 21:53:32.423707  558718 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0526 21:53:32.430843  558718 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0526 21:53:33.536194  558718 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
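
Note: after restarting containerd, start.go polls for /run/containerd/containerd.sock with a short backoff (the ~1.1s retry above) for up to 60s. A minimal sketch of such a wait loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls path until it exists or the deadline passes.
func waitForSocket(path string, timeout, backoff time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s: %w", timeout, path, err)
		}
		time.Sleep(backoff) // the real code uses a jittered retry (~1.1s in the log above)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("containerd socket is ready")
}
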
	I0526 21:53:33.541343  558718 start.go:401] Will wait 60s for crictl version
	I0526 21:53:33.541418  558718 ssh_runner.go:149] Run: sudo crictl version
	I0526 21:53:33.558214  558718 start.go:410] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.2.0
	RuntimeApiVersion:  v1alpha2
	I0526 21:53:33.558275  558718 ssh_runner.go:149] Run: containerd --version
	I0526 21:53:33.600567  558718 out.go:170] * Preparing Kubernetes v1.14.0 on containerd 1.2.0 ...
	I0526 21:53:33.600610  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) Calling .GetIP
	I0526 21:53:33.606495  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:33.606851  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:7b:d0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2021-05-26 22:53:18 +0000 UTC Type:0 Mac:52:54:00:77:7b:d0 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:stopped-upgrade-20210526214750-510955 Clientid:01:52:54:00:77:7b:d0}
	I0526 21:53:33.606882  558718 main.go:128] libmachine: (stopped-upgrade-20210526214750-510955) DBG | domain stopped-upgrade-20210526214750-510955 has defined IP address 192.168.50.39 and MAC address 52:54:00:77:7b:d0 in network minikube-net
	I0526 21:53:33.607070  558718 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0526 21:53:33.610317  558718 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:53:33.618498  558718 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/stopped-upgrade-20210526214750-510955/client.crt
	I0526 21:53:33.618611  558718 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/stopped-upgrade-20210526214750-510955/client.key
	I0526 21:53:33.618722  558718 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0526 21:53:33.618743  558718 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0526 21:53:33.618779  558718 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:53:33.634939  558718 containerd.go:566] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0526 21:53:33.634988  558718 ssh_runner.go:149] Run: which lz4
	I0526 21:53:33.638362  558718 kubeadm.go:870] preload failed, will try to load cached images: lz4
	I0526 21:53:33.638427  558718 ssh_runner.go:149] Run: sudo crictl images --output json
	I0526 21:53:33.654864  558718 containerd.go:566] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0526 21:53:33.654884  558718 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.14.0 k8s.gcr.io/kube-controller-manager:v1.14.0 k8s.gcr.io/kube-scheduler:v1.14.0 k8s.gcr.io/kube-proxy:v1.14.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/coredns:1.3.1 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0526 21:53:33.654970  558718 image.go:162] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0526 21:53:33.654995  558718 image.go:162] retrieving image: k8s.gcr.io/etcd:3.3.10
	I0526 21:53:33.655015  558718 image.go:162] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0526 21:53:33.655051  558718 image.go:162] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:53:33.654995  558718 image.go:162] retrieving image: k8s.gcr.io/kube-apiserver:v1.14.0
	I0526 21:53:33.655080  558718 image.go:162] retrieving image: k8s.gcr.io/kube-scheduler:v1.14.0
	I0526 21:53:33.655019  558718 image.go:162] retrieving image: k8s.gcr.io/kube-controller-manager:v1.14.0
	I0526 21:53:33.654971  558718 image.go:162] retrieving image: k8s.gcr.io/coredns:1.3.1
	I0526 21:53:33.654972  558718 image.go:162] retrieving image: k8s.gcr.io/pause:3.1
	I0526 21:53:33.655383  558718 image.go:162] retrieving image: k8s.gcr.io/kube-proxy:v1.14.0
	I0526 21:53:33.676843  558718 image.go:200] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc001136120}
	I0526 21:53:33.676919  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.1 | grep da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0526 21:53:33.977224  558718 cache_images.go:106] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0526 21:53:33.977269  558718 cri.go:205] Removing image: k8s.gcr.io/pause:3.1
	I0526 21:53:33.977316  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:33.982746  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.1
	I0526 21:53:34.166570  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I0526 21:53:34.166667  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0526 21:53:34.172586  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0526 21:53:34.172614  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (356864 bytes)
	I0526 21:53:34.195039  558718 containerd.go:260] Loading image: /var/lib/minikube/images/pause_3.1
	I0526 21:53:34.195090  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.1
	I0526 21:53:34.452921  558718 image.go:200] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc00133e080}
	I0526 21:53:34.453021  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0526 21:53:34.573146  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache
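
Note: the pause:3.1 cycle that just completed shows the whole per-image path when the preload is unusable: check the runtime for the expected digest with `ctr images check`, remove any stale tag with `crictl rmi`, stat the tarball under /var/lib/minikube/images, scp it from the local cache if absent, then `ctr -n=k8s.io images import` it. A sketch that only assembles that command sequence for one image (minikube runs these on the guest over SSH):

package main

import "fmt"

// loadCycle returns the guest-side commands used for one cached image,
// in the order they appear in the log above.
func loadCycle(image, digest, tarball string) []string {
	return []string{
		fmt.Sprintf(`sudo ctr -n=k8s.io images check | grep %s | grep %s`, image, digest),
		fmt.Sprintf(`sudo /usr/bin/crictl rmi %s`, image),
		fmt.Sprintf(`stat -c "%%s %%y" /var/lib/minikube/images/%s`, tarball),
		// (scp of the tarball from ~/.minikube/cache/images happens here if the stat fails)
		fmt.Sprintf(`sudo ctr -n=k8s.io images import /var/lib/minikube/images/%s`, tarball),
	}
}

func main() {
	for _, cmd := range loadCycle("k8s.gcr.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e", "pause_3.1") {
		fmt.Println(cmd)
	}
}
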
	I0526 21:53:34.661825  558718 image.go:200] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc00133e080}
	I0526 21:53:34.661903  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4"
	I0526 21:53:34.757490  558718 image.go:200] found k8s.gcr.io/coredns:1.3.1 locally: &{Image:0xc00133e080}
	I0526 21:53:34.757610  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.3.1 | grep eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c"
	I0526 21:53:34.972942  558718 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0526 21:53:34.972988  558718 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:53:34.973030  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:35.117837  558718 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0526 21:53:35.117897  558718 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0526 21:53:35.117948  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:35.162850  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0526 21:53:35.162908  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0526 21:53:35.162847  558718 cache_images.go:106] "k8s.gcr.io/coredns:1.3.1" needs transfer: "k8s.gcr.io/coredns:1.3.1" does not exist at hash "eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c" in container runtime
	I0526 21:53:35.162985  558718 cri.go:205] Removing image: k8s.gcr.io/coredns:1.3.1
	I0526 21:53:35.163016  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:35.196596  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns:1.3.1
	I0526 21:53:35.196697  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0526 21:53:35.196781  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0526 21:53:35.196911  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0526 21:53:35.196977  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0526 21:53:35.294129  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0526 21:53:35.294141  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1
	I0526 21:53:35.294168  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes)
	I0526 21:53:35.294240  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.3.1
	I0526 21:53:35.294289  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0526 21:53:35.294303  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (17437696 bytes)
	I0526 21:53:35.306057  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.3.1: stat -c "%s %y" /var/lib/minikube/images/coredns_1.3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.3.1': No such file or directory
	I0526 21:53:35.306100  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1 --> /var/lib/minikube/images/coredns_1.3.1 (12306944 bytes)
	I0526 21:53:35.441001  558718 containerd.go:260] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0526 21:53:35.441072  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0526 21:53:36.133488  558718 image.go:200] found k8s.gcr.io/kube-scheduler:v1.14.0 locally: &{Image:0xc0003e20e0}
	I0526 21:53:36.133560  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.14.0 | grep 00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492"
	I0526 21:53:37.188313  558718 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.14.0 | grep 00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492": (1.054719922s)
	I0526 21:53:37.188367  558718 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.14.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.14.0" does not exist at hash "00638a24688b0ccaebac56206e4b7e6c529cb6807e1c30700e6f3489b59a4492" in container runtime
	I0526 21:53:37.188400  558718 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.14.0
	I0526 21:53:37.188445  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:37.188497  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.747407818s)
	I0526 21:53:37.188506  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0526 21:53:37.188523  558718 containerd.go:260] Loading image: /var/lib/minikube/images/coredns_1.3.1
	I0526 21:53:37.188549  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.3.1
	I0526 21:53:37.630077  558718 image.go:200] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc00133e100}
	I0526 21:53:37.630161  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db"
	I0526 21:53:37.640429  558718 image.go:200] found k8s.gcr.io/kube-proxy:v1.14.0 locally: &{Image:0xc00133e400}
	I0526 21:53:37.640493  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.14.0 | grep 5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d"
	I0526 21:53:38.230400  558718 image.go:200] found k8s.gcr.io/kube-controller-manager:v1.14.0 locally: &{Image:0xc00133e100}
	I0526 21:53:38.230480  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.14.0 | grep b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150"
	I0526 21:53:38.348367  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.3.1: (1.159764351s)
	I0526 21:53:38.348408  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1 from cache
	I0526 21:53:38.348448  558718 containerd.go:260] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0526 21:53:38.348497  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0526 21:53:38.418491  558718 ssh_runner.go:189] Completed: which crictl: (1.230021649s)
	I0526 21:53:38.418602  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.14.0
	I0526 21:53:38.485250  558718 image.go:200] found k8s.gcr.io/etcd:3.3.10 locally: &{Image:0xc001136140}
	I0526 21:53:38.485320  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.3.10 | grep 2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d"
	I0526 21:53:38.622055  558718 image.go:200] found k8s.gcr.io/kube-apiserver:v1.14.0 locally: &{Image:0xc0012cc420}
	I0526 21:53:38.622125  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.14.0 | grep ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6"
	I0526 21:53:40.360019  558718 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db": (2.72982861s)
	I0526 21:53:40.360074  558718 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0526 21:53:40.360090  558718 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.14.0 | grep 5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d": (2.719550917s)
	I0526 21:53:40.360110  558718 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0526 21:53:40.360152  558718 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.14.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.14.0" does not exist at hash "5cd54e388abafbc4e1feb1050d139d718e5544494ffa55118141d6cbe4681e9d" in container runtime
	I0526 21:53:40.360160  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:40.360160  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4: (2.011644118s)
	I0526 21:53:40.360166  558718 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.14.0 | grep b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150": (2.129656183s)
	I0526 21:53:40.360187  558718 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.14.0
	I0526 21:53:40.360219  558718 ssh_runner.go:189] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.14.0: (1.94160581s)
	I0526 21:53:40.360227  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0
	I0526 21:53:40.360233  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:40.360173  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0526 21:53:40.360252  558718 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.14.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.14.0" does not exist at hash "b95b1efa0436be0942d09e035a099542787d0a32d23cda704bd3e84760d3d150" in container runtime
	I0526 21:53:40.360260  558718 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.14.0 | grep ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6": (1.738127362s)
	I0526 21:53:40.360252  558718 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.3.10 | grep 2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d": (1.874920807s)
	I0526 21:53:40.360282  558718 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.14.0
	I0526 21:53:40.360284  558718 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.14.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.14.0" does not exist at hash "ecf910f40d6e04e02f9da936745fdfdb455122df78e0ec3dc13c7a2eaa5191e6" in container runtime
	I0526 21:53:40.360287  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.14.0
	I0526 21:53:40.360310  558718 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.14.0
	I0526 21:53:40.360327  558718 cache_images.go:106] "k8s.gcr.io/etcd:3.3.10" needs transfer: "k8s.gcr.io/etcd:3.3.10" does not exist at hash "2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d" in container runtime
	I0526 21:53:40.360338  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:40.360338  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:40.360353  558718 cri.go:205] Removing image: k8s.gcr.io/etcd:3.3.10
	I0526 21:53:40.360386  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:53:40.372639  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.14.0
	I0526 21:53:40.372749  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.3.10
	I0526 21:53:40.377111  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0526 21:53:40.377227  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.14.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.14.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.14.0': No such file or directory
	I0526 21:53:40.377259  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0 --> /var/lib/minikube/images/kube-scheduler_v1.14.0 (29987328 bytes)
	I0526 21:53:40.377299  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.14.0
	I0526 21:53:40.377259  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.14.0
	I0526 21:53:40.456509  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10
	I0526 21:53:40.456571  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0
	I0526 21:53:40.456637  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.3.10
	I0526 21:53:40.456661  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.14.0
	I0526 21:53:40.538302  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0
	I0526 21:53:40.538346  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0526 21:53:40.538420  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.14.0
	I0526 21:53:40.538438  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0526 21:53:40.538487  558718 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0
	I0526 21:53:40.538546  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.14.0
	I0526 21:53:40.538567  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.3.10: stat -c "%s %y" /var/lib/minikube/images/etcd_3.3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.3.10': No such file or directory
	I0526 21:53:40.538585  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10 --> /var/lib/minikube/images/etcd_3.3.10 (76164608 bytes)
	I0526 21:53:40.538639  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.14.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.14.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.14.0': No such file or directory
	I0526 21:53:40.538663  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0 --> /var/lib/minikube/images/kube-controller-manager_v1.14.0 (47543808 bytes)
	I0526 21:53:40.582600  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.14.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.14.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.14.0': No such file or directory
	I0526 21:53:40.582649  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0 --> /var/lib/minikube/images/kube-apiserver_v1.14.0 (49745920 bytes)
	I0526 21:53:40.582654  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0526 21:53:40.582701  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (78078976 bytes)
	I0526 21:53:40.586161  558718 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.14.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.14.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.14.0': No such file or directory
	I0526 21:53:40.586190  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0 --> /var/lib/minikube/images/kube-proxy_v1.14.0 (30040064 bytes)
	I0526 21:53:40.878759  558718 containerd.go:260] Loading image: /var/lib/minikube/images/kube-scheduler_v1.14.0
	I0526 21:53:40.878830  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.14.0
	I0526 21:53:42.917477  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.14.0: (2.03861845s)
	I0526 21:53:42.917510  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0 from cache
	I0526 21:53:42.917531  558718 containerd.go:260] Loading image: /var/lib/minikube/images/kube-proxy_v1.14.0
	I0526 21:53:42.917582  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.14.0
	I0526 21:53:43.809958  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0 from cache
	I0526 21:53:43.810012  558718 containerd.go:260] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.14.0
	I0526 21:53:43.810068  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.14.0
	I0526 21:53:44.914131  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.14.0: (1.104030809s)
	I0526 21:53:44.914163  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0 from cache
	I0526 21:53:44.914192  558718 containerd.go:260] Loading image: /var/lib/minikube/images/kube-apiserver_v1.14.0
	I0526 21:53:44.914252  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.14.0
	I0526 21:53:46.035717  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.14.0: (1.12143318s)
	I0526 21:53:46.035755  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0 from cache
	I0526 21:53:46.035786  558718 containerd.go:260] Loading image: /var/lib/minikube/images/etcd_3.3.10
	I0526 21:53:46.035836  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.3.10
	I0526 21:53:47.989876  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.3.10: (1.954011053s)
	I0526 21:53:47.989907  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10 from cache
	I0526 21:53:47.989938  558718 containerd.go:260] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0526 21:53:47.989985  558718 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0
	I0526 21:53:55.280602  558718 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0: (7.290590593s)
	I0526 21:53:55.280633  558718 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
	I0526 21:53:55.280677  558718 cache_images.go:113] Successfully loaded all cached images
	I0526 21:53:55.280687  558718 cache_images.go:82] LoadImages completed in 21.625790387s
	I0526 21:53:55.280750  558718 ssh_runner.go:149] Run: sudo crictl info
	I0526 21:53:55.302660  558718 cni.go:93] Creating CNI manager for ""
	I0526 21:53:55.302686  558718 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0526 21:53:55.302695  558718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0526 21:53:55.302714  558718 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.39 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-20210526214750-510955 NodeName:stopped-upgrade-20210526214750-510955 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0526 21:53:55.302864  558718 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "stopped-upgrade-20210526214750-510955"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: stopped-upgrade-20210526214750-510955
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.39:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
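
The kubeadm configuration above is rendered by minikube from the options logged at kubeadm.go:153 and is later written to /var/tmp/minikube/kubeadm.yaml.new. As a hedged illustration of that rendering step only (a trimmed-down sketch, not minikube's actual template, struct, or field names), a manifest of this shape can be produced with Go's text/template:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// kubeadmParams is an illustrative stand-in for the kind of data fed into a
	// kubeadm config template; the field names here are assumptions, not minikube's.
	type kubeadmParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}
	
	const manifest = `apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.50.39",
			BindPort:          8443,
			NodeName:          "stopped-upgrade-20210526214750-510955",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.14.0",
		}
		// Render the manifest to stdout; in the log above the rendered file is
		// copied to the VM as /var/tmp/minikube/kubeadm.yaml.new.
		if err := template.Must(template.New("kubeadm").Parse(manifest)).Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}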
	
	I0526 21:53:55.302967  558718 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=stopped-upgrade-20210526214750-510955 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.39 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:}
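
The kubelet unit shown above (kubeadm.go:909) is likewise generated from the cluster config: the ExecStart flags encode the container runtime endpoint, hostname override and node IP for this profile. A minimal sketch of assembling such a flag string in Go; the flag set below is only a subset of what the log shows and the structure is illustrative, not minikube's code:

	package main
	
	import (
		"fmt"
		"sort"
		"strings"
	)
	
	func main() {
		// Illustrative subset of the kubelet flags visible in the log above.
		flags := map[string]string{
			"container-runtime":          "remote",
			"container-runtime-endpoint": "unix:///run/containerd/containerd.sock",
			"hostname-override":          "stopped-upgrade-20210526214750-510955",
			"kubeconfig":                 "/etc/kubernetes/kubelet.conf",
			"network-plugin":             "cni",
			"node-ip":                    "192.168.50.39",
		}
		keys := make([]string, 0, len(flags))
		for k := range flags {
			keys = append(keys, k)
		}
		sort.Strings(keys) // deterministic ordering, as in the generated unit
	
		parts := []string{"/var/lib/minikube/binaries/v1.14.0/kubelet"}
		for _, k := range keys {
			parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
		}
		fmt.Println("ExecStart=" + strings.Join(parts, " "))
	}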
	I0526 21:53:55.303010  558718 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0526 21:53:55.309990  558718 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.14.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.14.0': No such file or directory
	
	Initiating transfer...
	I0526 21:53:55.310049  558718 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.14.0
	I0526 21:53:55.318052  558718 binary.go:65] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1
	I0526 21:53:55.318076  558718 binary.go:65] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet.sha1
	I0526 21:53:55.318085  558718 binary.go:65] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm.sha1
	I0526 21:53:55.318148  558718 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0526 21:53:55.318150  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.14.0/kubeadm
	I0526 21:53:55.318167  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.14.0/kubectl
	I0526 21:53:55.326570  558718 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.14.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.14.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.14.0/kubeadm': No such file or directory
	I0526 21:53:55.326604  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.14.0/kubeadm --> /var/lib/minikube/binaries/v1.14.0/kubeadm (39574816 bytes)
	I0526 21:53:55.330482  558718 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.14.0/kubelet
	I0526 21:53:55.330525  558718 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.14.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.14.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.14.0/kubectl': No such file or directory
	I0526 21:53:55.330552  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.14.0/kubectl --> /var/lib/minikube/binaries/v1.14.0/kubectl (43103040 bytes)
	I0526 21:53:55.354433  558718 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.14.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.14.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.14.0/kubelet': No such file or directory
	I0526 21:53:55.354467  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.14.0/kubelet --> /var/lib/minikube/binaries/v1.14.0/kubelet (127850432 bytes)
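
The kubeadm, kubectl and kubelet binaries above are fetched from storage.googleapis.com with a checksum=file:...sha1 query, i.e. each download is checked against a published SHA-1 digest before being copied into /var/lib/minikube/binaries/v1.14.0. A hedged sketch of that verification step (not minikube's implementation; it assumes the .sha1 sidecar file holds just the hex digest):

	package main
	
	import (
		"crypto/sha1"
		"encoding/hex"
		"fmt"
		"io"
		"os"
		"strings"
	)
	
	// verifySHA1 compares the SHA-1 of a local file against the hex digest stored
	// in a .sha1 sidecar file, as published for kubernetes-release binaries.
	func verifySHA1(binaryPath, sha1Path string) error {
		want, err := os.ReadFile(sha1Path)
		if err != nil {
			return err
		}
		f, err := os.Open(binaryPath)
		if err != nil {
			return err
		}
		defer f.Close()
	
		h := sha1.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != strings.TrimSpace(string(want)) {
			return fmt.Errorf("checksum mismatch for %s: got %s", binaryPath, got)
		}
		return nil
	}
	
	func main() {
		// Paths are illustrative; point them at the downloaded binary and digest.
		if err := verifySHA1("kubelet", "kubelet.sha1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}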
	I0526 21:53:57.801771  558718 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0526 21:53:57.811028  558718 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (624 bytes)
	I0526 21:53:57.828322  558718 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0526 21:53:57.840436  558718 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1978 bytes)
	I0526 21:53:57.854547  558718 ssh_runner.go:149] Run: grep 192.168.50.39	control-plane.minikube.internal$ /etc/hosts
	I0526 21:53:57.860104  558718 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0526 21:53:57.873276  558718 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles for IP: 192.168.50.39
	I0526 21:53:57.873334  558718 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key
	I0526 21:53:57.873367  558718 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key
	I0526 21:53:57.873431  558718 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/client.key
	I0526 21:53:57.873466  558718 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.893b17cb
	I0526 21:53:57.873483  558718 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.893b17cb with IP's: [192.168.50.39 10.96.0.1 127.0.0.1 10.0.0.1]
	I0526 21:53:58.248937  558718 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.893b17cb ...
	I0526 21:53:58.248970  558718 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.893b17cb: {Name:mka3db9016749a57065b1197646fc0f793ab5ba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:53:58.249164  558718 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.893b17cb ...
	I0526 21:53:58.249177  558718 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.893b17cb: {Name:mkd08ea22450e4b98c309b37346adbba35556265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0526 21:53:58.249266  558718 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt.893b17cb -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt
	I0526 21:53:58.249385  558718 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key.893b17cb -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key
	I0526 21:53:58.249469  558718 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.key
	I0526 21:53:58.249567  558718 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem (1338 bytes)
	W0526 21:53:58.249605  558718 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955_empty.pem, impossibly tiny 0 bytes
	I0526 21:53:58.249618  558718 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca-key.pem (1675 bytes)
	I0526 21:53:58.249650  558718 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/ca.pem (1078 bytes)
	I0526 21:53:58.249676  558718 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/cert.pem (1123 bytes)
	I0526 21:53:58.249704  558718 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/key.pem (1679 bytes)
	I0526 21:53:58.250648  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0526 21:53:58.264992  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0526 21:53:58.277530  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0526 21:53:58.290484  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0526 21:53:58.303703  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0526 21:53:58.316611  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0526 21:53:58.329320  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0526 21:53:58.343239  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0526 21:53:58.358149  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/certs/510955.pem --> /usr/share/ca-certificates/510955.pem (1338 bytes)
	I0526 21:53:58.371980  558718 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
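
The apiserver certificate generated above carries IP SANs for the node address, the service VIP, 127.0.0.1 and 10.0.0.1, and is then copied into /var/lib/minikube/certs. For illustration only (self-signed here, whereas minikube signs with its cached minikubeCA), a certificate with those IP SANs can be issued in Go as follows:

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Self-signed for brevity; minikube signs apiserver certs with its CA.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs listed in the log above.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.50.39"),
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}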
	I0526 21:53:58.385480  558718 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (774 bytes)
	I0526 21:53:58.395794  558718 ssh_runner.go:149] Run: openssl version
	W0526 21:53:58.398672  558718 certs.go:431] OpenSSL not found. Please recreate the cluster with the latest minikube ISO.
	I0526 21:53:58.398721  558718 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/510955.pem && ln -fs /usr/share/ca-certificates/510955.pem /etc/ssl/certs/510955.pem"
	I0526 21:53:58.405474  558718 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0526 21:53:58.412028  558718 kubeadm.go:390] StartCluster: {Name:stopped-upgrade-20210526214750-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:2200 CPUs:2 DiskSize:0 VMDriver: Driver: HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR: HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork: KVMQemuURI: KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot: UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:false HostOnlyNicType: NatNicType: SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.39 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:53:58.412110  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0526 21:53:58.412149  558718 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 21:53:58.431614  558718 cri.go:76] found id: "27458730eafe3b5fe7cb39cd59c38b9b861eaca719fb3bc117150d0f8bb83b0f"
	I0526 21:53:58.431635  558718 cri.go:76] found id: "cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871"
	I0526 21:53:58.431642  558718 cri.go:76] found id: "759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b"
	I0526 21:53:58.431647  558718 cri.go:76] found id: "1d3d01288fa79f3a18877681ebca7cad31b2394d8838b0565cf1e00d7fb4e77d"
	I0526 21:53:58.431652  558718 cri.go:76] found id: "ce5f17128ded6e8062af3599423513ceafdc20738b0fa3b41a9861289dd0adfd"
	I0526 21:53:58.431659  558718 cri.go:76] found id: "b21cfac887bbe28bf0dafacbeef52f4ef58489aaf01372d637e97fbfdcab1aa7"
	I0526 21:53:58.431664  558718 cri.go:76] found id: "cc07fbf67984148150702e67f939ba110612955cb4d39f0db037e6b3602d8299"
	I0526 21:53:58.431670  558718 cri.go:76] found id: "7f493440e39b7a703b4399e72e4c097aab649a272bf81e92af01540339cbfadb"
	I0526 21:53:58.431682  558718 cri.go:76] found id: "6971d16192bc15a0788711dfc3a5f6222cd2848857ff3ca918e94373997eb9f7"
	I0526 21:53:58.431693  558718 cri.go:76] found id: ""
	I0526 21:53:58.431726  558718 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0526 21:53:58.444351  558718 cri.go:103] JSON = null
	W0526 21:53:58.444396  558718 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 9
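
The warning above comes from comparing 'crictl ps' (9 kube-system containers) with 'runc --root /run/containerd/runc/k8s.io list -f json', which returned a literal null, so no paused containers could be identified for unpausing. A hedged sketch of consuming that JSON list in Go (the struct field names are assumptions for illustration):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// containerState holds just the fields this sketch needs from a
	// "runc list -f json" style listing; the JSON tag names are assumed.
	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	
	func main() {
		var states []containerState
		// Read the JSON list from stdin; a literal "null", as seen in the log
		// above, decodes to an empty slice without error.
		if err := json.NewDecoder(os.Stdin).Decode(&states); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		paused := 0
		for _, s := range states {
			if s.Status == "paused" {
				paused++
			}
		}
		fmt.Printf("%d containers listed, %d paused\n", len(states), paused)
	}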
	I0526 21:53:58.444446  558718 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0526 21:53:58.450680  558718 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0526 21:53:58.456833  558718 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0526 21:53:58.462210  558718 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 21:53:58.462267  558718 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0526 21:53:58.873388  558718 out.go:197]   - Generating certificates and keys ...
	I0526 21:54:00.497991  558718 out.go:197]   - Booting up control plane ...
	W0526 21:58:00.517424  558718 out.go:235] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
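
Because this cluster runs containerd rather than Docker, the docker commands suggested in the kubeadm output above map onto the CRI tooling already used elsewhere in this log, for example:
		- 'sudo crictl ps -a | grep kube'
		- 'sudo crictl logs CONTAINERID'
	together with 'journalctl -u kubelet' for the kubelet service itself.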
	
	I0526 21:58:00.517496  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0526 21:58:00.968845  558718 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0526 21:58:00.980894  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0526 21:58:00.980975  558718 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0526 21:58:00.999992  558718 cri.go:76] found id: "cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871"
	I0526 21:58:01.000017  558718 cri.go:76] found id: "759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b"
	I0526 21:58:01.000024  558718 cri.go:76] found id: ""
	W0526 21:58:01.000033  558718 kubeadm.go:840] found 2 kube-system containers to stop
	I0526 21:58:01.000040  558718 cri.go:221] Stopping containers: [cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871 759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b]
	I0526 21:58:01.000085  558718 ssh_runner.go:149] Run: which crictl
	I0526 21:58:01.003873  558718 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871 759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b
	I0526 21:58:01.021280  558718 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0526 21:58:01.032093  558718 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0526 21:58:01.032125  558718 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0526 22:02:02.251373  558718 out.go:197]   - Generating certificates and keys ...
	I0526 22:02:02.254178  558718 out.go:197]   - Booting up control plane ...
	I0526 22:02:02.255868  558718 kubeadm.go:392] StartCluster complete in 8m3.84384432s
	I0526 22:02:02.255915  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0526 22:02:02.255981  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0526 22:02:02.276615  558718 cri.go:76] found id: ""
	I0526 22:02:02.276633  558718 logs.go:270] 0 containers: []
	W0526 22:02:02.276638  558718 logs.go:272] No container was found matching "kube-apiserver"
	I0526 22:02:02.276645  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0526 22:02:02.276686  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0526 22:02:02.291401  558718 cri.go:76] found id: ""
	I0526 22:02:02.291415  558718 logs.go:270] 0 containers: []
	W0526 22:02:02.291420  558718 logs.go:272] No container was found matching "etcd"
	I0526 22:02:02.291425  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0526 22:02:02.291467  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0526 22:02:02.309694  558718 cri.go:76] found id: "cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871"
	I0526 22:02:02.309710  558718 cri.go:76] found id: "759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b"
	I0526 22:02:02.309716  558718 cri.go:76] found id: ""
	I0526 22:02:02.309722  558718 logs.go:270] 2 containers: [cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871 759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b]
	I0526 22:02:02.309765  558718 ssh_runner.go:149] Run: which crictl
	I0526 22:02:02.314112  558718 ssh_runner.go:149] Run: which crictl
	I0526 22:02:02.317979  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0526 22:02:02.318030  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0526 22:02:02.333662  558718 cri.go:76] found id: ""
	I0526 22:02:02.333680  558718 logs.go:270] 0 containers: []
	W0526 22:02:02.333686  558718 logs.go:272] No container was found matching "kube-scheduler"
	I0526 22:02:02.333692  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0526 22:02:02.333734  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0526 22:02:02.353012  558718 cri.go:76] found id: ""
	I0526 22:02:02.353029  558718 logs.go:270] 0 containers: []
	W0526 22:02:02.353034  558718 logs.go:272] No container was found matching "kube-proxy"
	I0526 22:02:02.353050  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0526 22:02:02.353095  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0526 22:02:02.375407  558718 cri.go:76] found id: ""
	I0526 22:02:02.375428  558718 logs.go:270] 0 containers: []
	W0526 22:02:02.375435  558718 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0526 22:02:02.375444  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0526 22:02:02.375490  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0526 22:02:02.393728  558718 cri.go:76] found id: ""
	I0526 22:02:02.393746  558718 logs.go:270] 0 containers: []
	W0526 22:02:02.393752  558718 logs.go:272] No container was found matching "storage-provisioner"
	I0526 22:02:02.393758  558718 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0526 22:02:02.393799  558718 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0526 22:02:02.409358  558718 cri.go:76] found id: ""
	I0526 22:02:02.409375  558718 logs.go:270] 0 containers: []
	W0526 22:02:02.409382  558718 logs.go:272] No container was found matching "kube-controller-manager"
	I0526 22:02:02.409392  558718 logs.go:123] Gathering logs for coredns [759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b] ...
	I0526 22:02:02.409408  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 759c61c562ff92f22f1bb743199d35eca4911e2fe067db592a019c50026b229b"
	I0526 22:02:02.428749  558718 logs.go:123] Gathering logs for containerd ...
	I0526 22:02:02.428768  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0526 22:02:02.581561  558718 logs.go:123] Gathering logs for container status ...
	I0526 22:02:02.581587  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0526 22:02:02.602566  558718 logs.go:123] Gathering logs for kubelet ...
	I0526 22:02:02.602589  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0526 22:02:02.678137  558718 logs.go:123] Gathering logs for dmesg ...
	I0526 22:02:02.678160  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0526 22:02:02.687497  558718 logs.go:123] Gathering logs for describe nodes ...
	I0526 22:02:02.687517  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0526 22:02:02.769265  558718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0526 22:02:02.769315  558718 logs.go:123] Gathering logs for coredns [cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871] ...
	I0526 22:02:02.769339  558718 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd346ba7690ca1304c77d0d7a4ae5340009439433fbe78a692264a344ce9c871"
	W0526 22:02:02.790260  558718 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0526 22:02:02.790290  558718 out.go:235] * 
	* 
	W0526 22:02:02.790454  558718 out.go:235] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0526 22:02:02.790475  558718 out.go:235] * 
	* 
	W0526 22:02:02.792191  558718 out.go:235] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	W0526 22:02:02.792205  558718 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 22:02:02.792210  558718 out.go:235] │    * If the above advice does not help, please let us know:                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                                                 │
	W0526 22:02:02.792214  558718 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                               │
	W0526 22:02:02.792220  558718 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 22:02:02.792224  558718 out.go:235] │    * Please attach the following file to the GitHub issue:                                                                                                  │
	│    * Please attach the following file to the GitHub issue:                                                                                                  │
	W0526 22:02:02.792228  558718 out.go:235] │    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/logs/lastStart.txt    │
	W0526 22:02:02.792236  558718 out.go:235] │                                                                                                                                                             │
	│                                                                                                                                                             │
	W0526 22:02:02.792242  558718 out.go:235] ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	W0526 22:02:02.792249  558718 out.go:235] 
	
	I0526 22:02:02.796299  558718 out.go:170] 
	W0526 22:02:02.796475  558718 out.go:235] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0526 22:02:02.796565  558718 out.go:235] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0526 22:02:02.796636  558718 out.go:235] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0526 22:02:02.798666  558718 out.go:170] 

                                                
                                                
** /stderr **
version_upgrade_test.go:206: upgrade from v1.0.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-20210526214750-510955 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-05-26 22:02:02.843505008 +0000 UTC m=+4957.961053520
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210526214750-510955 -n stopped-upgrade-20210526214750-510955
E0526 22:02:02.958804  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210526214750-510955 -n stopped-upgrade-20210526214750-510955: exit status 3 (189.955742ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0526 22:02:03.031513  561672 status.go:374] failed to get storage capacity of /var: strconv.Atoi: parsing "-": invalid syntax
	E0526 22:02:03.031543  561672 status.go:247] status error: strconv.Atoi: parsing "-": invalid syntax

                                                
                                                
** /stderr **
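
Note: the status error above comes from parsing a capacity value that was read back as a literal "-" rather than a number. A minimal standalone Go reproduction of that parse failure, with one defensive way of handling it (the "-" input and the fallback value are illustrative only, not minikube's actual handling):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// "-" stands in for the capacity value the status probe read back from the
	// node; the report does not show where that value originated.
	raw := "-"
	capacity, err := strconv.Atoi(raw)
	if err != nil {
		// Prints: strconv.Atoi: parsing "-": invalid syntax
		fmt.Println(err)
		// One defensive option: treat the capacity as unknown instead of erroring out.
		capacity = -1
	}
	fmt.Println("capacity:", capacity)
}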
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "stopped-upgrade-20210526214750-510955" host is not running, skipping log retrieval (state="Error")
helpers_test.go:171: Cleaning up "stopped-upgrade-20210526214750-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20210526214750-510955
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210526214750-510955: (1.020784895s)
--- FAIL: TestStoppedBinaryUpgrade (853.38s)
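
Note: the kubeadm output above recommends 'systemctl status kubelet' and 'journalctl -xeu kubelet', and minikube's own suggestion is to retry with --extra-config=kubelet.cgroup-driver=systemd. A sketch of re-driving the same start command with that flag appended, using os/exec much as the test helpers invoke the binary; whether the flag actually clears the kubelet timeout is not established by this report:

package main

import (
	"errors"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same flags the failed run used, plus the --extra-config value the log suggests.
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "stopped-upgrade-20210526214750-510955",
		"--memory=2200", "--alsologtostderr", "-v=1",
		"--driver=kvm2", "--container-runtime=containerd",
		"--extra-config=kubelet.cgroup-driver=systemd",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The failed run recorded above exited with status 109.
			log.Fatalf("minikube start exited with status %d", exitErr.ExitCode())
		}
		log.Fatal(err)
	}
	log.Println("minikube start succeeded")
}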

                                                
                                    

Test pass (220/260)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 14.04
4 TestDownloadOnly/v1.14.0/preload-exists 0
6 TestDownloadOnly/v1.14.0/binaries 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.20.2/json-events 10.01
11 TestDownloadOnly/v1.20.2/preload-exists 0
13 TestDownloadOnly/v1.20.2/binaries 0
15 TestDownloadOnly/v1.20.2/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-alpha.1/json-events 18.1
18 TestDownloadOnly/v1.22.0-alpha.1/preload-exists 0
20 TestDownloadOnly/v1.22.0-alpha.1/binaries 0
22 TestDownloadOnly/v1.22.0-alpha.1/LogsDuration 4.95
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestOffline 232.74
31 TestAddons/parallel/MetricsServer 5.68
36 TestCertOptions 73.36
44 TestErrorSpam/start 25.44
45 TestErrorSpam/status 25.46
46 TestErrorSpam/pause 2.47
47 TestErrorSpam/unpause 0.64
48 TestErrorSpam/stop 92.48
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 168.92
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 4.94
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.19
59 TestFunctional/serial/CacheCmd/cache/add_remote 3.79
60 TestFunctional/serial/CacheCmd/cache/add_local 1.32
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
62 TestFunctional/serial/CacheCmd/cache/list 0.06
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.69
65 TestFunctional/serial/CacheCmd/cache/delete 0.12
66 TestFunctional/serial/MinikubeKubectlCmd 0.12
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
68 TestFunctional/serial/ExtraConfig 109.07
69 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/parallel/ConfigCmd 0.45
72 TestFunctional/parallel/DashboardCmd 5.42
73 TestFunctional/parallel/DryRun 0.35
74 TestFunctional/parallel/StatusCmd 1.17
75 TestFunctional/parallel/LogsCmd 1.85
76 TestFunctional/parallel/LogsFileCmd 1.69
77 TestFunctional/parallel/MountCmd 5.94
79 TestFunctional/parallel/ServiceCmd 15.8
80 TestFunctional/parallel/AddonsCmd 0.31
81 TestFunctional/parallel/PersistentVolumeClaim 33.95
83 TestFunctional/parallel/SSHCmd 0.52
84 TestFunctional/parallel/CpCmd 0.48
85 TestFunctional/parallel/MySQL 28.74
86 TestFunctional/parallel/FileSync 0.3
87 TestFunctional/parallel/CertSync 1
91 TestFunctional/parallel/NodeLabels 0.06
92 TestFunctional/parallel/LoadImage 2.47
93 TestFunctional/parallel/RemoveImage 3.89
94 TestFunctional/parallel/BuildImage 5.14
95 TestFunctional/parallel/ListImages 0.28
97 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
99 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
100 TestFunctional/parallel/ProfileCmd/profile_list 0.35
101 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
102 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
103 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
104 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
105 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
106 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
111 TestFunctional/delete_busybox_image 0.08
112 TestFunctional/delete_my-image_image 0.03
113 TestFunctional/delete_minikube_cached_images 0.04
117 TestJSONOutput/start/Audit 0
119 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
120 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
122 TestJSONOutput/pause/Audit 0
124 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
125 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
127 TestJSONOutput/unpause/Audit 0
129 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
130 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
132 TestJSONOutput/stop/Audit 0
136 TestErrorJSONOutput 0.33
139 TestMainNoArgs 0.06
142 TestMultiNode/serial/FreshStart2Nodes 219.94
143 TestMultiNode/serial/DeployApp2Nodes 4.83
144 TestMultiNode/serial/PingHostFrom2Pods 1
145 TestMultiNode/serial/AddNode 61.87
146 TestMultiNode/serial/ProfileList 0.25
147 TestMultiNode/serial/CopyFile 1.83
150 TestMultiNode/serial/DeleteNode 1.59
151 TestMultiNode/serial/StopMultiNode 184.39
152 TestMultiNode/serial/RestartMultiNode 319.95
153 TestMultiNode/serial/ValidateNameConflict 68.57
159 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
160 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11
162 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
163 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10.19
165 TestDebPackageInstall/install_amd64_debian:10/minikube 0
166 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.61
168 TestDebPackageInstall/install_amd64_debian:9/minikube 0
169 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.47
171 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
172 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 14.57
174 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
175 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 13.84
177 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
178 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 14
180 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
181 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 12.98
182 TestPreload 185.84
190 TestKubernetesUpgrade 288.29
193 TestPause/serial/Start 202.08
201 TestNetworkPlugins/group/false 0.43
205 TestPause/serial/SecondStartNoReconfiguration 5.17
206 TestPause/serial/Pause 0.74
207 TestPause/serial/VerifyStatus 0.26
208 TestPause/serial/Unpause 0.93
209 TestPause/serial/PauseAgain 5.65
210 TestPause/serial/DeletePaused 1.05
211 TestPause/serial/VerifyDeletedResources 0.27
219 TestNetworkPlugins/group/auto/Start 167.22
220 TestNetworkPlugins/group/auto/KubeletFlags 0.23
221 TestNetworkPlugins/group/auto/NetCatPod 9.56
222 TestNetworkPlugins/group/auto/DNS 0.25
223 TestNetworkPlugins/group/auto/Localhost 0.19
224 TestNetworkPlugins/group/auto/HairPin 0.2
225 TestNetworkPlugins/group/cilium/Start 164.53
226 TestNetworkPlugins/group/calico/Start 158.8
227 TestNetworkPlugins/group/cilium/ControllerPod 5.03
228 TestNetworkPlugins/group/cilium/KubeletFlags 0.22
229 TestNetworkPlugins/group/cilium/NetCatPod 10.55
230 TestNetworkPlugins/group/cilium/DNS 0.34
231 TestNetworkPlugins/group/cilium/Localhost 0.24
232 TestNetworkPlugins/group/cilium/HairPin 0.21
233 TestNetworkPlugins/group/custom-weave/Start 172.52
234 TestNetworkPlugins/group/calico/ControllerPod 5.03
235 TestNetworkPlugins/group/calico/KubeletFlags 0.22
236 TestNetworkPlugins/group/calico/NetCatPod 13.58
237 TestNetworkPlugins/group/calico/DNS 0.27
238 TestNetworkPlugins/group/calico/Localhost 0.21
239 TestNetworkPlugins/group/calico/HairPin 0.21
240 TestNetworkPlugins/group/kindnet/Start 178.07
241 TestNetworkPlugins/group/flannel/Start 174.55
242 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.26
243 TestNetworkPlugins/group/custom-weave/NetCatPod 12.88
244 TestNetworkPlugins/group/enable-default-cni/Start 182.88
245 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
246 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
247 TestNetworkPlugins/group/kindnet/NetCatPod 9.54
248 TestNetworkPlugins/group/kindnet/DNS 0.26
249 TestNetworkPlugins/group/kindnet/Localhost 0.26
250 TestNetworkPlugins/group/kindnet/HairPin 0.22
251 TestNetworkPlugins/group/bridge/Start 163.87
252 TestNetworkPlugins/group/flannel/ControllerPod 5.03
253 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
254 TestNetworkPlugins/group/flannel/NetCatPod 9.58
255 TestNetworkPlugins/group/flannel/DNS 0.26
256 TestNetworkPlugins/group/flannel/Localhost 0.23
257 TestNetworkPlugins/group/flannel/HairPin 0.22
259 TestStartStop/group/old-k8s-version/serial/FirstStart 152.22
261 TestStartStop/group/no-preload/serial/FirstStart 181.19
262 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
263 TestNetworkPlugins/group/enable-default-cni/NetCatPod 18.66
264 TestNetworkPlugins/group/enable-default-cni/DNS 0.31
265 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
266 TestNetworkPlugins/group/enable-default-cni/HairPin 0.29
268 TestStartStop/group/embed-certs/serial/FirstStart 188.32
269 TestNetworkPlugins/group/bridge/KubeletFlags 1.62
270 TestNetworkPlugins/group/bridge/NetCatPod 11.17
271 TestNetworkPlugins/group/bridge/DNS 0.24
272 TestNetworkPlugins/group/bridge/Localhost 0.19
273 TestNetworkPlugins/group/bridge/HairPin 0.19
275 TestStartStop/group/default-k8s-different-port/serial/FirstStart 148.63
276 TestStartStop/group/old-k8s-version/serial/DeployApp 8.71
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
278 TestStartStop/group/old-k8s-version/serial/Stop 92.51
279 TestStartStop/group/no-preload/serial/DeployApp 9.61
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
281 TestStartStop/group/no-preload/serial/Stop 93.51
282 TestStartStop/group/embed-certs/serial/DeployApp 8.65
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.81
284 TestStartStop/group/embed-certs/serial/Stop 92.51
285 TestStartStop/group/default-k8s-different-port/serial/DeployApp 7.59
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
287 TestStartStop/group/old-k8s-version/serial/SecondStart 441.52
288 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.82
289 TestStartStop/group/default-k8s-different-port/serial/Stop 92.51
290 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
291 TestStartStop/group/no-preload/serial/SecondStart 327.91
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
293 TestStartStop/group/embed-certs/serial/SecondStart 455.01
294 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
295 TestStartStop/group/default-k8s-different-port/serial/SecondStart 529.56
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.03
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
299 TestStartStop/group/no-preload/serial/Pause 2.68
301 TestStartStop/group/newest-cni/serial/FirstStart 81.03
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
305 TestStartStop/group/old-k8s-version/serial/Pause 2.69
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
308 TestStartStop/group/newest-cni/serial/Stop 92.52
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
312 TestStartStop/group/embed-certs/serial/Pause 2.46
313 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
314 TestStartStop/group/newest-cni/serial/SecondStart 117.26
315 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
316 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.24
317 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/default-k8s-different-port/serial/Pause 2.55
319 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
320 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
322 TestStartStop/group/newest-cni/serial/Pause 2.05
TestDownloadOnly/v1.14.0/json-events (14.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210526203924-510955 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210526203924-510955 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (14.035803961s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (14.04s)

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/binaries
--- PASS: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:166: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210526203924-510955
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210526203924-510955: exit status 85 (72.759795ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/26 20:39:24
	Running on machine: debian-jenkins-agent-4
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0526 20:39:24.998517  510968 out.go:291] Setting OutFile to fd 1 ...
	I0526 20:39:24.998590  510968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 20:39:24.998604  510968 out.go:304] Setting ErrFile to fd 2...
	I0526 20:39:24.998608  510968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 20:39:24.998695  510968 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	W0526 20:39:24.998808  510968 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/config/config.json: no such file or directory
	I0526 20:39:24.999059  510968 out.go:298] Setting JSON to true
	I0526 20:39:25.033439  510968 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":15727,"bootTime":1622045838,"procs":139,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 20:39:25.033528  510968 start.go:120] virtualization: kvm guest
	I0526 20:39:25.036584  510968 notify.go:169] Checking for updates...
	I0526 20:39:25.038690  510968 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 20:39:25.067735  510968 start.go:278] selected driver: kvm2
	I0526 20:39:25.067789  510968 start.go:751] validating driver "kvm2" against <nil>
	I0526 20:39:25.067885  510968 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 20:39:25.068062  510968 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 20:39:25.080337  510968 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 20:39:25.080394  510968 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0526 20:39:25.081413  510968 start_flags.go:311] Using suggested 6000MB memory alloc based on sys=32179MB, container=0MB
	I0526 20:39:25.081571  510968 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
	I0526 20:39:25.081644  510968 cni.go:93] Creating CNI manager for ""
	I0526 20:39:25.081656  510968 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0526 20:39:25.081667  510968 start_flags.go:268] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0526 20:39:25.081683  510968 start_flags.go:273] config:
	{Name:download-only-20210526203924-510955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210526203924-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 20:39:25.082051  510968 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 20:39:25.084124  510968 download.go:86] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/iso/minikube-v1.20.0.iso
	I0526 20:39:26.823552  510968 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0526 20:39:26.856207  510968 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0526 20:39:26.856257  510968 cache.go:54] Caching tarball of preloaded images
	I0526 20:39:26.856438  510968 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0526 20:39:26.894575  510968 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0526 20:39:26.896912  510968 preload.go:205] getting checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0526 20:39:26.930322  510968 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:8891d3d5a9795ff90493434142d1724b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210526203924-510955"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)
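
Note: the preload tarballs in the log above are fetched with a '?checksum=md5:<hash>' parameter ("getting checksum for preloaded-images-..."). A small sketch of the equivalent local verification, assuming the tarball has already been downloaded; the file name and expected hash are copied from the v1.14.0 download line above and are placeholders for whichever preload is being checked (this is not minikube's download code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Placeholder path and hash, mirroring the md5 query parameter in the download URL.
	path := "preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4"
	want := "8891d3d5a9795ff90493434142d1724b"

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("checksum OK:", got)
}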

                                                
                                    
TestDownloadOnly/v1.20.2/json-events (10.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210526203924-510955 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210526203924-510955 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (10.010645527s)
--- PASS: TestDownloadOnly/v1.20.2/json-events (10.01s)

                                                
                                    
TestDownloadOnly/v1.20.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.2/preload-exists
--- PASS: TestDownloadOnly/v1.20.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.2/binaries
--- PASS: TestDownloadOnly/v1.20.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.2/LogsDuration
aaa_download_only_test.go:166: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210526203924-510955
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210526203924-510955: exit status 85 (73.651922ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/26 20:39:39
	Running on machine: debian-jenkins-agent-4
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0526 20:39:39.106350  511005 out.go:291] Setting OutFile to fd 1 ...
	I0526 20:39:39.106518  511005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 20:39:39.106527  511005 out.go:304] Setting ErrFile to fd 2...
	I0526 20:39:39.106532  511005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 20:39:39.106628  511005 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	W0526 20:39:39.106735  511005 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/config/config.json: no such file or directory
	I0526 20:39:39.106865  511005 out.go:298] Setting JSON to true
	I0526 20:39:39.141408  511005 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":15742,"bootTime":1622045838,"procs":139,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 20:39:39.141468  511005 start.go:120] virtualization: kvm guest
	I0526 20:39:39.143878  511005 notify.go:169] Checking for updates...
	W0526 20:39:39.146407  511005 start.go:659] api.Load failed for download-only-20210526203924-510955: filestore "download-only-20210526203924-510955": Docker machine "download-only-20210526203924-510955" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0526 20:39:39.146461  511005 driver.go:331] Setting default libvirt URI to qemu:///system
	W0526 20:39:39.146500  511005 start.go:659] api.Load failed for download-only-20210526203924-510955: filestore "download-only-20210526203924-510955": Docker machine "download-only-20210526203924-510955" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0526 20:39:39.176383  511005 start.go:278] selected driver: kvm2
	I0526 20:39:39.176399  511005 start.go:751] validating driver "kvm2" against &{Name:download-only-20210526203924-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210526203924-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 20:39:39.176562  511005 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 20:39:39.176685  511005 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 20:39:39.188092  511005 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 20:39:39.188618  511005 cni.go:93] Creating CNI manager for ""
	I0526 20:39:39.188635  511005 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0526 20:39:39.188642  511005 start_flags.go:273] config:
	{Name:download-only-20210526203924-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:download-only-20210526203924-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 20:39:39.188719  511005 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 20:39:39.191132  511005 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 20:39:39.231605  511005 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 20:39:39.231655  511005 cache.go:54] Caching tarball of preloaded images
	I0526 20:39:39.231933  511005 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0526 20:39:39.263897  511005 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0526 20:39:39.265888  511005 preload.go:205] getting checksum for preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4 ...
	I0526 20:39:39.301369  511005 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:36577dd8813e0d24ccd9f361b0cce3bf -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.2-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210526203924-510955"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.22.0-alpha.1/json-events (18.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-alpha.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210526203924-510955 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210526203924-510955 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (18.102974822s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/json-events (18.10s)

                                                
                                    
TestDownloadOnly/v1.22.0-alpha.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-alpha.1/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-alpha.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-alpha.1/binaries
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-alpha.1/LogsDuration (4.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-alpha.1/LogsDuration
aaa_download_only_test.go:166: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210526203924-510955
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210526203924-510955: exit status 85 (4.946119425s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/26 20:39:49
	Running on machine: debian-jenkins-agent-4
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0526 20:39:49.191047  511041 out.go:291] Setting OutFile to fd 1 ...
	I0526 20:39:49.191249  511041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 20:39:49.191259  511041 out.go:304] Setting ErrFile to fd 2...
	I0526 20:39:49.191263  511041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 20:39:49.191360  511041 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	W0526 20:39:49.191483  511041 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/config/config.json: no such file or directory
	I0526 20:39:49.191609  511041 out.go:298] Setting JSON to true
	I0526 20:39:49.226180  511041 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":15752,"bootTime":1622045838,"procs":136,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 20:39:49.226243  511041 start.go:120] virtualization: kvm guest
	I0526 20:39:49.228703  511041 notify.go:169] Checking for updates...
	W0526 20:39:49.230869  511041 start.go:659] api.Load failed for download-only-20210526203924-510955: filestore "download-only-20210526203924-510955": Docker machine "download-only-20210526203924-510955" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0526 20:39:49.230910  511041 driver.go:331] Setting default libvirt URI to qemu:///system
	W0526 20:39:49.230930  511041 start.go:659] api.Load failed for download-only-20210526203924-510955: filestore "download-only-20210526203924-510955": Docker machine "download-only-20210526203924-510955" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0526 20:39:49.260102  511041 start.go:278] selected driver: kvm2
	I0526 20:39:49.260127  511041 start.go:751] validating driver "kvm2" against &{Name:download-only-20210526203924-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:download-only-20210526203924-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 20:39:49.260256  511041 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 20:39:49.260381  511041 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0526 20:39:49.270990  511041 install.go:136] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.20.0
	I0526 20:39:49.271408  511041 cni.go:93] Creating CNI manager for ""
	I0526 20:39:49.271421  511041 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0526 20:39:49.271442  511041 start_flags.go:273] config:
	{Name:download-only-20210526203924-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-alpha.1 ClusterName:download-only-20210526203924-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 20:39:49.271522  511041 iso.go:123] acquiring lock: {Name:mkae6243686e006cb5174618a31875b12ffbed81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0526 20:39:49.273216  511041 preload.go:98] Checking if preload exists for k8s version v1.22.0-alpha.1 and runtime containerd
	I0526 20:39:49.317667  511041 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
	I0526 20:39:49.317705  511041 cache.go:54] Caching tarball of preloaded images
	I0526 20:39:49.318853  511041 preload.go:98] Checking if preload exists for k8s version v1.22.0-alpha.1 and runtime containerd
	I0526 20:39:49.346372  511041 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
	I0526 20:39:49.348716  511041 preload.go:205] getting checksum for preloaded-images-k8s-v11-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
	I0526 20:39:49.383390  511041 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:c3fdf78300b91571c8e3f29a851b7c39 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
	I0526 20:39:56.137049  511041 preload.go:215] saving checksum for preloaded-images-k8s-v11-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
	I0526 20:39:56.137138  511041 preload.go:222] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
	I0526 20:39:58.056933  511041 cache.go:57] Finished verifying existence of preloaded tar for  v1.22.0-alpha.1 on containerd
	I0526 20:39:58.057094  511041 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/download-only-20210526203924-510955/config.json ...
	I0526 20:39:58.057524  511041 download.go:86] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.1/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.22.0-alpha.1/kubectl
	I0526 20:39:58.057593  511041 download.go:86] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.1/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.22.0-alpha.1/kubelet
	I0526 20:39:58.057620  511041 download.go:86] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.1/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/cache/linux/v1.22.0-alpha.1/kubeadm
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210526203924-510955"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/LogsDuration (4.95s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210526203924-510955
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestOffline (232.74s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20210526214750-510955 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20210526214750-510955 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (3m51.69497372s)
helpers_test.go:171: Cleaning up "offline-containerd-20210526214750-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20210526214750-510955
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20210526214750-510955: (1.045547761s)
--- PASS: TestOffline (232.74s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: metrics-server stabilized in 23.519626ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:335: "metrics-server-7894db45f8-phj9t" [3b500aed-08ff-49f1-a145-a4dff3e6c469] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015826345s
addons_test.go:382: (dbg) Run:  kubectl --context addons-20210526204012-510955 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210526204012-510955 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

                                                
                                    
TestCertOptions (73.36s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210526215143-510955 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210526215143-510955 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m12.371861628s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210526215143-510955 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210526215143-510955 config view
helpers_test.go:171: Cleaning up "cert-options-20210526215143-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210526215143-510955
--- PASS: TestCertOptions (73.36s)
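
The certificate check above can be approximated as follows. This sketch is not cert_options_test.go's own verification: it simply greps the openssl dump for the extra SANs and the kubeconfig for the custom port, with the expected values (192.168.15.15, www.google.com, 8555) inferred from the flags shown in the start command.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "cert-options-20210526215143-510955"

	// Dump the apiserver certificate from inside the node, as the test does
	// via `minikube ssh`.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh/openssl failed: %v\n%s", err, out)
		return
	}

	// The start flags requested extra SANs; a plausible verification is
	// simply to look for them in the certificate dump.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing SAN:", want)
		}
	}

	// The non-default apiserver port should also show up in the kubeconfig.
	cfg, _ := exec.Command("kubectl", "--context", profile, "config", "view").CombinedOutput()
	if !strings.Contains(string(cfg), "8555") {
		fmt.Println("kubeconfig does not mention port 8555")
	}
	fmt.Println("certificate checks done")
}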

                                                
                                    
TestErrorSpam/start (25.44s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:210: Cleaning up 1 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 start --dry-run
--- PASS: TestErrorSpam/start (25.44s)
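
The pattern behind TestErrorSpam/start is to run the same command repeatedly and flag unexpected log noise. The sketch below imitates that loop; the five iterations and the E0/W0 substring heuristic are placeholders, since the real test's detection logic is not part of this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	logDir := "/tmp/nospam-20210526210930-510955"
	profile := "nospam-20210526210930-510955"

	// Run the dry-run start several times, as the test above does, and
	// collect anything that looks like log spam.
	var noisy []string
	for i := 0; i < 5; i++ {
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"--log_dir", logDir, "start", "--dry-run").CombinedOutput()
		for _, line := range strings.Split(string(out), "\n") {
			// Placeholder heuristic: the real test applies its own allow-list.
			if strings.Contains(line, "E0") || strings.Contains(line, "W0") {
				noisy = append(noisy, line)
			}
		}
	}
	if len(noisy) == 0 {
		fmt.Println("no unexpected error/warning lines")
		return
	}
	fmt.Println("possible spam:")
	for _, l := range noisy {
		fmt.Println(" ", l)
	}
}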

                                                
                                    
TestErrorSpam/status (25.46s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 status
--- PASS: TestErrorSpam/status (25.46s)

                                                
                                    
TestErrorSpam/pause (2.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 pause
--- PASS: TestErrorSpam/pause (2.47s)

                                                
                                    
TestErrorSpam/unpause (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

                                                
                                    
TestErrorSpam/stop (92.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 stop
error_spam_test.go:168: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210526210930-510955 --log_dir /tmp/nospam-20210526210930-510955 stop: (1m32.477816428s)
--- PASS: TestErrorSpam/stop (92.48s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1563: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/files/etc/test/nested/copy/510955/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
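
The sync path above reflects minikube's file-sync convention: files placed under .minikube/files/<path> are copied to /<path> inside the node at start. A small sketch that stages such a file (the default ~/.minikube location and the file contents are illustrative; this CI run uses its own MINIKUBE_HOME, as the path shows):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default files root; the CI run above uses a dedicated directory, but
	// the layout under ".minikube/files" is the same.
	root := filepath.Join(os.Getenv("HOME"), ".minikube", "files")

	// A file at <root>/etc/test/nested/copy/510955/hosts should end up at
	// /etc/test/nested/copy/510955/hosts inside the node on the next start.
	target := filepath.Join(root, "etc", "test", "nested", "copy", "510955", "hosts")
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	if err := os.WriteFile(target, []byte("127.0.0.1 example.test\n"), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("staged sync file at", target)
}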

                                                
                                    
TestFunctional/serial/StartWithProxy (168.92s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:541: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210526211257-510955 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0526 21:14:19.840545  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:19.846603  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:19.856899  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:19.877218  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:19.917672  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:19.997945  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:20.158350  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:20.478871  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:21.119922  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:22.400607  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:24.961406  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:30.082347  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:14:40.322968  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:15:00.803510  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:15:41.763821  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
functional_test.go:541: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210526211257-510955 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (2m48.914919296s)
--- PASS: TestFunctional/serial/StartWithProxy (168.92s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (4.94s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:585: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210526211257-510955 --alsologtostderr -v=8
functional_test.go:585: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210526211257-510955 --alsologtostderr -v=8: (4.941578986s)
functional_test.go:589: soft start took 4.94236929s for "functional-20210526211257-510955" cluster.
--- PASS: TestFunctional/serial/SoftStart (4.94s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:605: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:618: (dbg) Run:  kubectl --context functional-20210526211257-510955 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 cache add k8s.gcr.io/pause:3.1
functional_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 cache add k8s.gcr.io/pause:3.3
functional_test.go:910: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 cache add k8s.gcr.io/pause:3.3: (1.459302789s)
functional_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 cache add k8s.gcr.io/pause:latest
functional_test.go:910: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 cache add k8s.gcr.io/pause:latest: (1.350319175s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)
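
The cache add steps above amount to: pull-and-cache each image on the host, then make it available to containerd in the node. A minimal Go sketch of the same sequence, finishing with the crictl check that the later verify_cache_inside_node step performs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-20210526211257-510955"

	// Cache the same remote images the test adds above.
	for _, img := range []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/pause:3.3", "k8s.gcr.io/pause:latest"} {
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cache", "add", img).CombinedOutput(); err != nil {
			fmt.Printf("cache add %s failed: %v\n%s", img, err, out)
			return
		}
	}

	// Confirm the images are visible to containerd inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl images").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl images failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}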

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:940: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210526211257-510955 /tmp/functional-20210526211257-510955390370721
functional_test.go:945: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 cache add minikube-local-cache-test:functional-20210526211257-510955
functional_test.go:945: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 cache add minikube-local-cache-test:functional-20210526211257-510955: (1.138258576s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:952: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:959: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:972: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (219.304424ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1005: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 cache reload
functional_test.go:1005: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 cache reload: (2.001868588s)
functional_test.go:1010: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.69s)
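For reference, the cache-reload round trip above can be reproduced by hand with the same binary and profile; this is only an illustrative sketch, not part of the test suite:
    # remove the cached image inside the node, confirm it is gone, then reload the cache
    out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl rmi k8s.gcr.io/pause:latest
    out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # expected to fail (exit 1)
    out/minikube-linux-amd64 -p functional-20210526211257-510955 cache reload
    out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # succeeds after the reload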

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1019: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1019: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:636: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 kubectl -- --context functional-20210526211257-510955 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:655: (dbg) Run:  out/kubectl --context functional-20210526211257-510955 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (109.07s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:669: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210526211257-510955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0526 21:17:03.685103  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
functional_test.go:669: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210526211257-510955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m49.073114328s)
functional_test.go:673: restart took 1m49.07332151s for "functional-20210526211257-510955" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (109.07s)
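The restart above forwards an apiserver flag through --extra-config; the equivalent manual invocation (same profile and flags as this run) looks like:
    # restart the existing profile, passing an admission-plugin flag to the apiserver
    out/minikube-linux-amd64 start -p functional-20210526211257-510955 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all  # block until all components report ready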

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:720: (dbg) Run:  kubectl --context functional-20210526211257-510955 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:734: etcd phase: Running
functional_test.go:744: etcd status: Ready
functional_test.go:734: kube-apiserver phase: Running
functional_test.go:744: kube-apiserver status: Ready
functional_test.go:734: kube-controller-manager phase: Running
functional_test.go:744: kube-controller-manager status: Ready
functional_test.go:734: kube-scheduler phase: Running
functional_test.go:744: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 config unset cpus
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 config get cpus
functional_test.go:1045: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210526211257-510955 config get cpus: exit status 14 (66.672133ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 config set cpus 2

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 config get cpus
functional_test.go:1045: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210526211257-510955 config get cpus: exit status 14 (79.172558ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
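The config subtest relies on "config get" exiting with status 14 when the key is unset; an illustrative shell round trip of the same steps:
    out/minikube-linux-amd64 -p functional-20210526211257-510955 config unset cpus
    out/minikube-linux-amd64 -p functional-20210526211257-510955 config get cpus || echo "exit $? (key not set)"
    out/minikube-linux-amd64 -p functional-20210526211257-510955 config set cpus 2
    out/minikube-linux-amd64 -p functional-20210526211257-510955 config get cpus
    out/minikube-linux-amd64 -p functional-20210526211257-510955 config unset cpus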

                                                
                                    
TestFunctional/parallel/DashboardCmd (5.42s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:811: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20210526211257-510955 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:816: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20210526211257-510955 --alsologtostderr -v=1] ...
helpers_test.go:499: unable to kill pid 526734: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.42s)

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:873: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210526211257-510955 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:873: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210526211257-510955 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (164.69597ms)

                                                
                                                
-- stdout --
	* [functional-20210526211257-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0526 21:18:14.786396  526439 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:18:14.786558  526439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:18:14.786566  526439 out.go:304] Setting ErrFile to fd 2...
	I0526 21:18:14.786569  526439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:18:14.786665  526439 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:18:14.786871  526439 out.go:298] Setting JSON to false
	I0526 21:18:14.827069  526439 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":18057,"bootTime":1622045838,"procs":173,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:18:14.827143  526439 start.go:120] virtualization: kvm guest
	I0526 21:18:14.828943  526439 out.go:170] * [functional-20210526211257-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:18:14.830314  526439 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:18:14.831609  526439 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:18:14.832809  526439 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:18:14.834079  526439 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:18:14.834788  526439 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:18:14.834852  526439 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:18:14.845470  526439 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:36861
	I0526 21:18:14.845924  526439 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:18:14.846444  526439 main.go:128] libmachine: Using API Version  1
	I0526 21:18:14.846463  526439 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:18:14.846844  526439 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:18:14.847032  526439 main.go:128] libmachine: (functional-20210526211257-510955) Calling .DriverName
	I0526 21:18:14.847219  526439 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:18:14.847744  526439 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:18:14.847784  526439 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:18:14.858400  526439 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0526 21:18:14.858805  526439 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:18:14.859196  526439 main.go:128] libmachine: Using API Version  1
	I0526 21:18:14.859218  526439 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:18:14.859582  526439 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:18:14.859737  526439 main.go:128] libmachine: (functional-20210526211257-510955) Calling .DriverName
	I0526 21:18:14.888986  526439 out.go:170] * Using the kvm2 driver based on existing profile
	I0526 21:18:14.889008  526439 start.go:278] selected driver: kvm2
	I0526 21:18:14.889015  526439 start.go:751] validating driver "kvm2" against &{Name:functional-20210526211257-510955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.22-1620785771-11384@sha256:f5844fe35994179bbad8dda27d4912304a2fedccdf0bf93ce8b2ec2b3b83af1c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterNa
me:functional-20210526211257-510955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.137 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false r
egistry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0526 21:18:14.889164  526439 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:18:14.891042  526439 out.go:170] 
	W0526 21:18:14.891138  526439 out.go:235] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0526 21:18:14.892413  526439 out.go:170] 

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:888: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210526211257-510955 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.35s)
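The dry-run check only validates the request: an undersized --memory is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the running cluster. The two invocations from this run, for reference:
    # expected to fail validation: 250MB is below the usable minimum of 1800MB
    out/minikube-linux-amd64 start -p functional-20210526211257-510955 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=containerd
    # expected to pass validation against the existing profile
    out/minikube-linux-amd64 start -p functional-20210526211257-510955 --dry-run \
      --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd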

                                                
                                    
TestFunctional/parallel/StatusCmd (1.17s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:763: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:769: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:780: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)
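status is exercised in its plain, Go-template, and JSON forms; the same commands run by hand (the template string, including its "kublet" label, is copied verbatim from the test):
    out/minikube-linux-amd64 -p functional-20210526211257-510955 status
    out/minikube-linux-amd64 -p functional-20210526211257-510955 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-20210526211257-510955 status -o json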

                                                
                                    
TestFunctional/parallel/LogsCmd (1.85s)
=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 logs

                                                
                                                
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 logs: (1.85136686s)
--- PASS: TestFunctional/parallel/LogsCmd (1.85s)

                                                
                                    
TestFunctional/parallel/LogsFileCmd (1.69s)
=== RUN   TestFunctional/parallel/LogsFileCmd
=== PAUSE TestFunctional/parallel/LogsFileCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LogsFileCmd

                                                
                                                
=== CONT  TestFunctional/parallel/LogsFileCmd
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 logs --file /tmp/functional-20210526211257-510955073349243/logs.txt

                                                
                                                
=== CONT  TestFunctional/parallel/LogsFileCmd
functional_test.go:1097: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 logs --file /tmp/functional-20210526211257-510955073349243/logs.txt: (1.691955377s)
--- PASS: TestFunctional/parallel/LogsFileCmd (1.69s)
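logs can be streamed to stdout or written to a file via --file; a minimal sketch (the output path below is illustrative, not the temp path used by the test):
    out/minikube-linux-amd64 -p functional-20210526211257-510955 logs
    out/minikube-linux-amd64 -p functional-20210526211257-510955 logs --file /tmp/minikube-logs.txt  # illustrative path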

                                                
                                    
TestFunctional/parallel/MountCmd (5.94s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210526211257-510955 /tmp/mounttest531032204:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1622063888955410921" to /tmp/mounttest531032204/created-by-test
functional_test_mount_test.go:107: wrote "test-1622063888955410921" to /tmp/mounttest531032204/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1622063888955410921" to /tmp/mounttest531032204/test-1622063888955410921
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.070434ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 26 21:18 created-by-test
-rw-r--r-- 1 docker docker 24 May 26 21:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 26 21:18 test-1622063888955410921
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh cat /mount-9p/test-1622063888955410921
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-20210526211257-510955 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:335: "busybox-mount" [8a134975-b93c-4a70-9c42-18e90b85b177] Pending
helpers_test.go:335: "busybox-mount" [8a134975-b93c-4a70-9c42-18e90b85b177] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [8a134975-b93c-4a70-9c42-18e90b85b177] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 3.011487996s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-20210526211257-510955 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210526211257-510955 /tmp/mounttest531032204:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd (5.94s)
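The mount test keeps the 9p mount running as a background daemon while it verifies the share from inside the VM. A hand-run sketch of the same steps (the /tmp/mounttest directory and the backgrounding with & and kill are illustrative; the test manages the mount process itself):
    mkdir -p /tmp/mounttest
    out/minikube-linux-amd64 mount -p functional-20210526211257-510955 /tmp/mounttest:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "findmnt -T /mount-9p | grep 9p"  # confirm the 9p mount is visible
    out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo umount -f /mount-9p"        # clean up
    kill $MOUNT_PID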

                                                
                                    
TestFunctional/parallel/ServiceCmd (15.8s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1273: (dbg) Run:  kubectl --context functional-20210526211257-510955 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1279: (dbg) Run:  kubectl --context functional-20210526211257-510955 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1284: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:335: "hello-node-6cbfcd7cbc-5z6hw" [eef0c6e2-a5e8-4d8b-a5b7-90e5e32ea448] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:335: "hello-node-6cbfcd7cbc-5z6hw" [eef0c6e2-a5e8-4d8b-a5b7-90e5e32ea448] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1284: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 14.142164285s
functional_test.go:1288: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1301: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 service --namespace=default --https --url hello-node
functional_test.go:1310: found endpoint: https://192.168.39.137:31815
functional_test.go:1321: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 service hello-node --url --format={{.IP}}
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 service hello-node --url
functional_test.go:1336: found endpoint for hello-node: http://192.168.39.137:31815
functional_test.go:1347: Attempting to fetch http://192.168.39.137:31815 ...
functional_test.go:1366: http://192.168.39.137:31815: success! body:
Hostname: hello-node-6cbfcd7cbc-5z6hw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.137:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.137:31815
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmd (15.80s)
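The service test is the usual expose-and-fetch round trip: create a deployment, expose it as a NodePort, ask minikube for the URL, and request it. The test fetches the URL with a Go HTTP client; the curl call below is only for illustration:
    kubectl --context functional-20210526211257-510955 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-20210526211257-510955 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-20210526211257-510955 service list
    out/minikube-linux-amd64 -p functional-20210526211257-510955 service hello-node --url   # e.g. http://192.168.39.137:31815
    curl -s "$(out/minikube-linux-amd64 -p functional-20210526211257-510955 service hello-node --url)"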

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.31s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 addons list
functional_test.go:1392: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.31s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.95s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:335: "storage-provisioner" [7badfafc-ff25-45f3-bff1-6036a279a334] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010189421s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210526211257-510955 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210526211257-510955 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210526211257-510955 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210526211257-510955 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210526211257-510955 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [98e22fb9-6143-4039-a6ef-52934163bfa5] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [98e22fb9-6143-4039-a6ef-52934163bfa5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [98e22fb9-6143-4039-a6ef-52934163bfa5] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.014468081s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210526211257-510955 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210526211257-510955 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210526211257-510955 delete -f testdata/storage-provisioner/pod.yaml: (4.604401567s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210526211257-510955 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [b462fe72-65d4-497f-9d5b-b13ef8455d8d] Pending
helpers_test.go:335: "sp-pod" [b462fe72-65d4-497f-9d5b-b13ef8455d8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:335: "sp-pod" [b462fe72-65d4-497f-9d5b-b13ef8455d8d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.019798866s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210526211257-510955 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.95s)
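The PVC test checks that data written through the claim survives deletion and re-creation of the pod; the kubectl side of that flow, using the testdata manifests from the minikube repo, is roughly:
    kubectl --context functional-20210526211257-510955 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-20210526211257-510955 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210526211257-510955 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-20210526211257-510955 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210526211257-510955 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context functional-20210526211257-510955 exec sp-pod -- ls /tmp/mount                     # foo should still exist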

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1431: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.48s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
functional_test.go:1466: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
functional_test.go:1480: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.48s)

                                                
                                    
TestFunctional/parallel/MySQL (28.74s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1515: (dbg) Run:  kubectl --context functional-20210526211257-510955 replace --force -f testdata/mysql.yaml
functional_test.go:1520: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:335: "mysql-9bbbc5bbb-jmzvs" [b64f714a-e94e-4e8f-83c0-b843b205f140] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-jmzvs" [b64f714a-e94e-4e8f-83c0-b843b205f140] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1520: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.01254096s
functional_test.go:1527: (dbg) Run:  kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1527: (dbg) Non-zero exit: kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;": exit status 1 (222.098497ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1527: (dbg) Run:  kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;"
functional_test.go:1527: (dbg) Non-zero exit: kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;": exit status 1 (312.952996ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1527: (dbg) Run:  kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1527: (dbg) Non-zero exit: kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;": exit status 1 (260.22971ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1527: (dbg) Run:  kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;"
functional_test.go:1527: (dbg) Non-zero exit: kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;": exit status 1 (253.416853ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1527: (dbg) Run:  kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- mysql -ppassword -e "show databases;"
2021/05/26 21:18:21 [DEBUG] GET http://127.0.0.1:45023/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (28.74s)
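The "Access denied" and "Can't connect" errors above are expected while mysqld is still initializing; the test simply retries the same exec until it succeeds. A minimal retry loop in shell (the pod name is the one from this run; in general it would be looked up with kubectl get pods):
    # retry "show databases" until the mysql container is actually ready
    until kubectl --context functional-20210526211257-510955 exec mysql-9bbbc5bbb-jmzvs -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done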

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1611: Checking for existence of /etc/test/nested/copy/510955/hosts within VM
functional_test.go:1612: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo cat /etc/test/nested/copy/510955/hosts"
functional_test.go:1617: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1652: Checking for existence of /etc/ssl/certs/510955.pem within VM

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1653: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo cat /etc/ssl/certs/510955.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1652: Checking for existence of /usr/share/ca-certificates/510955.pem within VM
functional_test.go:1653: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo cat /usr/share/ca-certificates/510955.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1652: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1653: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo cat /etc/ssl/certs/51391683.0"
--- PASS: TestFunctional/parallel/CertSync (1.00s)
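Cert sync is verified by reading the synced certificate back from each expected location inside the VM; a compact sketch of the same checks:
    for f in /etc/ssl/certs/510955.pem /usr/share/ca-certificates/510955.pem /etc/ssl/certs/51391683.0; do
      out/minikube-linux-amd64 -p functional-20210526211257-510955 ssh "sudo cat $f" > /dev/null && echo "found $f"
    done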

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:197: (dbg) Run:  kubectl --context functional-20210526211257-510955 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/LoadImage (2.47s)
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:220: (dbg) Run:  docker pull busybox:1.33

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:227: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210526211257-510955

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:233: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 image load docker.io/library/busybox:load-functional-20210526211257-510955

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:233: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 image load docker.io/library/busybox:load-functional-20210526211257-510955: (1.855771543s)
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210526211257-510955 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210526211257-510955
--- PASS: TestFunctional/parallel/LoadImage (2.47s)

                                                
                                    
TestFunctional/parallel/RemoveImage (3.89s)
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:261: (dbg) Run:  docker pull busybox:1.32
functional_test.go:268: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210526211257-510955
functional_test.go:274: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 image load docker.io/library/busybox:remove-functional-20210526211257-510955

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:274: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 image load docker.io/library/busybox:remove-functional-20210526211257-510955: (3.040922219s)
functional_test.go:280: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 image rm docker.io/library/busybox:remove-functional-20210526211257-510955
functional_test.go:317: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210526211257-510955 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.89s)

                                                
                                    
TestFunctional/parallel/BuildImage (5.14s)
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:369: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210526211257-510955 -- nohup sudo -b buildkitd --oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 image build -t localhost/my-image:functional-20210526211257-510955 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:341: (dbg) Done: out/minikube-linux-amd64 -p functional-20210526211257-510955 image build -t localhost/my-image:functional-20210526211257-510955 testdata/build: (4.599888253s)
functional_test.go:349: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210526211257-510955 image build -t localhost/my-image:functional-20210526211257-510955 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 sha256:3df8d9d6fa5550a958208ca07cd76e4fa99e9b21596a086d0d4f0c508bef8d44
#1 transferring dockerfile: 77B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load .dockerignore
#2 sha256:eb7779f369dca8a2842ba2f7336a1a8555de8157142ec3c78ed8fd5c42a022c1
#2 transferring context: 2B done
#2 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 sha256:da853382a7535e068feae4d80bdd0ad2567df3d5cd484fd68f919294d091b053
#3 DONE 1.0s

                                                
                                                
#4 [1/3] FROM docker.io/library/busybox@sha256:b5fc1d7b2e4ea86a06b0cf88de915a2c43a99a00b6b3c0af731e5f4c07ae8eff
#4 sha256:36c85af59bdf128d45b487702b7e20a1b8175e823f7f4803c2d637fef1d54bea
#4 resolve docker.io/library/busybox@sha256:b5fc1d7b2e4ea86a06b0cf88de915a2c43a99a00b6b3c0af731e5f4c07ae8eff
#4 ...

                                                
                                                
#6 [internal] load build context
#6 sha256:83e95bc7a79a25ce9a562057ad318b54b08119670415ce220983fe2999b407b6
#6 transferring context: 62B done
#6 DONE 0.3s

                                                
                                                
#4 [1/3] FROM docker.io/library/busybox@sha256:b5fc1d7b2e4ea86a06b0cf88de915a2c43a99a00b6b3c0af731e5f4c07ae8eff
#4 sha256:36c85af59bdf128d45b487702b7e20a1b8175e823f7f4803c2d637fef1d54bea
#4 resolve docker.io/library/busybox@sha256:b5fc1d7b2e4ea86a06b0cf88de915a2c43a99a00b6b3c0af731e5f4c07ae8eff 0.3s done
#4 DONE 0.1s

                                                
                                                
#5 [2/3] RUN true
#5 sha256:dddd996461ebab3704f76b05eb7877d988339dde20a825ac1d25fc94cd8e31b6
#5 DONE 1.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 sha256:eed33dbeee68900cd9e54ee1e6be3cea19a56a75b4f90b3294a3d39d208e92df
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#8 exporting layers
#8 exporting layers 0.6s done
#8 exporting manifest sha256:bf7e67eb1593bf26c0180ca837432630cd701b58f2af123088fd19f1681ef0f1 0.0s done
#8 exporting config sha256:38fdb1702ffd9337154de9114c74bdb292829cf9986e92a6380fb9d2a635b2f7 0.0s done
#8 naming to localhost/my-image:functional-20210526211257-510955 0.0s done
#8 DONE 0.7s
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210526211257-510955 -- sudo crictl inspecti localhost/my-image:functional-20210526211257-510955
--- PASS: TestFunctional/parallel/BuildImage (5.14s)
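LoadImage and BuildImage are verified the same way: get the image into the node, then confirm containerd can see it via crictl inspecti. A combined sketch of the steps above (tag names are the ones used in this run; buildkitd must already be running inside the VM, which the test starts with the nohup line shown earlier):
    # load a locally tagged image into the cluster and verify it from inside the node
    docker pull busybox:1.33
    docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210526211257-510955
    out/minikube-linux-amd64 -p functional-20210526211257-510955 image load docker.io/library/busybox:load-functional-20210526211257-510955
    out/minikube-linux-amd64 ssh -p functional-20210526211257-510955 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210526211257-510955

    # build against the cluster's buildkit/containerd and verify the result
    out/minikube-linux-amd64 -p functional-20210526211257-510955 image build -t localhost/my-image:functional-20210526211257-510955 testdata/build
    out/minikube-linux-amd64 ssh -p functional-20210526211257-510955 -- sudo crictl inspecti localhost/my-image:functional-20210526211257-510955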

                                                
                                    
TestFunctional/parallel/ListImages (0.28s)
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:390: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210526211257-510955 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210526211257-510955
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210526211257-510955 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1118: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1161: Took "279.37269ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1175: Took "68.113208ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1211: Took "289.206948ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1224: Took "83.985299ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1746: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1746: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1746: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210526211257-510955 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210526211257-510955 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.98.67.197 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210526211257-510955 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_busybox_image (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:164: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210526211257-510955
functional_test.go:169: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210526211257-510955
--- PASS: TestFunctional/delete_busybox_image (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:176: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210526211257-510955
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:184: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210526211257-510955
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestErrorJSONOutput (0.33s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210526212238-510955 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210526212238-510955 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.847872ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210526212238-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"86efb902-1795-44eb-973f-0cb7f463d497","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig"},"datacontenttype":"application/json","id":"efcfd433-51cd-461a-9aee-1ac7093bf2a5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"5905e6cc-65a6-4d45-8eb6-0fbcd376c7b5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube"},"datacontenttype":"application/json","id":"a35cd53a-0d5b-4e57-afd2-36028de8559d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=11504"},"datacontenttype":"application/json","id":"8b498fa7-c820-49e9-a67e-180156aac1aa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"8fe14a05-a297-4246-9b8b-96fc9ff21413","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20210526212238-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210526212238-510955
--- PASS: TestErrorJSONOutput (0.33s)
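
Each line in the stdout block above is a CloudEvents-style JSON object with data, datacontenttype, id, source, specversion, and type fields; error events additionally carry an exitcode inside data, as in the DRV_UNSUPPORTED_OS line. The Go sketch below is illustrative only (not test output): it decodes such lines from stdin using only the field names captured above; the type name minikubeEvent is hypothetical.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the captured --output=json lines.
type minikubeEvent struct {
	Data            map[string]string `json:"data"`
	DataContentType string            `json:"datacontenttype"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	SpecVersion     string            `json:"specversion"`
	Type            string            `json:"type"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line, skip it
		}
		// io.k8s.sigs.minikube.error events also carry data["exitcode"],
		// e.g. "56" for the unsupported-driver error above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}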

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (219.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210526212238-510955 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0526 21:22:50.134647  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:50.140723  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:50.150971  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:50.171197  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:50.211441  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:50.291714  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:50.452139  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:50.772702  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:51.413861  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:52.694887  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:22:55.255865  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:23:00.377081  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:23:10.617812  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:23:31.420895  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:24:12.381589  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:24:19.839788  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:25:34.302406  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
multinode_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210526212238-510955 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m39.531006026s)
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (219.94s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:431: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:436: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- rollout status deployment/busybox
multinode_test.go:436: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- rollout status deployment/busybox: (2.579461707s)
multinode_test.go:442: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:454: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-4g265 -- nslookup kubernetes.io
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-dlslt -- nslookup kubernetes.io
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-4g265 -- nslookup kubernetes.default
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-dlslt -- nslookup kubernetes.default
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-4g265 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-dlslt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.83s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-4g265 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 ssh -p multinode-20210526212238-510955 "ip -4 -br -o a s eth0 | tr -s ' ' | cut -d' ' -f3"
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210526212238-510955 -- exec busybox-6cd5ff77cb-dlslt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 ssh -p multinode-20210526212238-510955 "ip -4 -br -o a s eth0 | tr -s ' ' | cut -d' ' -f3"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

                                                
                                    
TestMultiNode/serial/AddNode (61.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:105: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210526212238-510955 -v 3 --alsologtostderr
multinode_test.go:105: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210526212238-510955 -v 3 --alsologtostderr: (1m1.299222035s)
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (61.87s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

                                                
                                    
TestMultiNode/serial/CopyFile (1.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status --output json --alsologtostderr
functional_test.go:1466: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 cp testdata/cp-test.txt /home/docker/cp-test.txt
functional_test.go:1480: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 ssh "sudo cat /home/docker/cp-test.txt"
functional_test.go:1466: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 cp testdata/cp-test.txt multinode-20210526212238-510955-m02:/home/docker/cp-test.txt
functional_test.go:1480: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 ssh -n multinode-20210526212238-510955-m02 "sudo cat /home/docker/cp-test.txt"
functional_test.go:1466: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 cp testdata/cp-test.txt multinode-20210526212238-510955-m03:/home/docker/cp-test.txt
functional_test.go:1480: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 ssh -n multinode-20210526212238-510955-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (1.83s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 node delete m03
multinode_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210526212238-510955 node delete m03: (1.073549901s)
multinode_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.59s)
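
The go-template passed to kubectl above walks every node's status.conditions and prints the status of the Ready condition, one node per line. The Go sketch below is illustrative only (not test output): it evaluates the same template with text/template against hand-written data shaped like "kubectl get nodes -o json"; the sample data is hypothetical and the stray shell quotes around the template are dropped.

package main

import (
	"os"
	"text/template"
)

// Template text copied from the kubectl invocation above (shell quoting removed).
const nodeReadyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Hypothetical data trimmed to the fields the template actually touches.
	nodes := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				},
			},
		},
	}
	t := template.Must(template.New("ready").Parse(nodeReadyTmpl))
	_ = t.Execute(os.Stdout, nodes) // prints " True" for the single ready node
}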

                                                
                                    
TestMultiNode/serial/StopMultiNode (184.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 stop
E0526 21:29:19.840572  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 21:30:42.887117  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
multinode_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210526212238-510955 stop: (3m4.208734159s)
multinode_test.go:270: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status
multinode_test.go:270: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210526212238-510955 status: exit status 7 (90.813271ms)

                                                
                                                
-- stdout --
	multinode-20210526212238-510955
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210526212238-510955-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:277: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status --alsologtostderr
multinode_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210526212238-510955 status --alsologtostderr: exit status 7 (86.27106ms)

                                                
                                                
-- stdout --
	multinode-20210526212238-510955
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210526212238-510955-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0526 21:32:10.137332  529660 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:32:10.137508  529660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:32:10.137518  529660 out.go:304] Setting ErrFile to fd 2...
	I0526 21:32:10.137522  529660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:32:10.137628  529660 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:32:10.137772  529660 out.go:298] Setting JSON to false
	I0526 21:32:10.137795  529660 mustload.go:65] Loading cluster: multinode-20210526212238-510955
	I0526 21:32:10.138034  529660 status.go:253] checking status of multinode-20210526212238-510955 ...
	I0526 21:32:10.138363  529660 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:32:10.138405  529660 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:32:10.149039  529660 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:37165
	I0526 21:32:10.149436  529660 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:32:10.149990  529660 main.go:128] libmachine: Using API Version  1
	I0526 21:32:10.150012  529660 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:32:10.150327  529660 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:32:10.150497  529660 main.go:128] libmachine: (multinode-20210526212238-510955) Calling .GetState
	I0526 21:32:10.153194  529660 status.go:328] multinode-20210526212238-510955 host status = "Stopped" (err=<nil>)
	I0526 21:32:10.153210  529660 status.go:341] host is not running, skipping remaining checks
	I0526 21:32:10.153215  529660 status.go:255] multinode-20210526212238-510955 status: &{Name:multinode-20210526212238-510955 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0526 21:32:10.153248  529660 status.go:253] checking status of multinode-20210526212238-510955-m02 ...
	I0526 21:32:10.153521  529660 main.go:128] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0526 21:32:10.153550  529660 main.go:128] libmachine: Launching plugin server for driver kvm2
	I0526 21:32:10.163793  529660 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:34799
	I0526 21:32:10.164140  529660 main.go:128] libmachine: () Calling .GetVersion
	I0526 21:32:10.164608  529660 main.go:128] libmachine: Using API Version  1
	I0526 21:32:10.164631  529660 main.go:128] libmachine: () Calling .SetConfigRaw
	I0526 21:32:10.164983  529660 main.go:128] libmachine: () Calling .GetMachineName
	I0526 21:32:10.165162  529660 main.go:128] libmachine: (multinode-20210526212238-510955-m02) Calling .GetState
	I0526 21:32:10.167920  529660 status.go:328] multinode-20210526212238-510955-m02 host status = "Stopped" (err=<nil>)
	I0526 21:32:10.167933  529660 status.go:341] host is not running, skipping remaining checks
	I0526 21:32:10.167938  529660 status.go:255] multinode-20210526212238-510955-m02 status: &{Name:multinode-20210526212238-510955-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.39s)
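
Exit status 7 from "minikube status" corresponds here to a stopped host, which the test treats as expected; the captured stdout reports every component as Stopped. The Go sketch below is illustrative only (not part of the suite): it parses that plain-text status output; the field names come from the stdout block above and the function name allStopped is hypothetical.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// allStopped reports whether every component line in the status output
// ("host:", "kubelet:", "apiserver:", "kubeconfig:") reads Stopped.
func allStopped(statusOutput string) bool {
	sc := bufio.NewScanner(strings.NewReader(statusOutput))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		for _, key := range []string{"host:", "kubelet:", "apiserver:", "kubeconfig:"} {
			if strings.HasPrefix(line, key) && !strings.HasSuffix(line, "Stopped") {
				return false
			}
		}
	}
	return true
}

func main() {
	out := "multinode-20210526212238-510955\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Println("all stopped:", allStopped(out))
}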

                                                
                                    
TestMultiNode/serial/RestartMultiNode (319.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210526212238-510955 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0526 21:32:50.134829  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:34:19.839626  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
multinode_test.go:304: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210526212238-510955 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (5m19.282381869s)
multinode_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210526212238-510955 status --alsologtostderr
multinode_test.go:324: (dbg) Run:  kubectl get nodes
multinode_test.go:332: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (319.95s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (68.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:393: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210526212238-510955
multinode_test.go:402: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210526212238-510955-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:402: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210526212238-510955-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (104.916075ms)

                                                
                                                
-- stdout --
	* [multinode-20210526212238-510955-m02] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210526212238-510955-m02' is duplicated with machine name 'multinode-20210526212238-510955-m02' in profile 'multinode-20210526212238-510955'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:410: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210526212238-510955-m03 --driver=kvm2  --container-runtime=containerd
E0526 21:37:50.134649  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
multinode_test.go:410: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210526212238-510955-m03 --driver=kvm2  --container-runtime=containerd: (1m6.814534342s)
multinode_test.go:417: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210526212238-510955
multinode_test.go:417: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210526212238-510955: exit status 80 (225.841862ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210526212238-510955
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210526212238-510955-m03 already exists in multinode-20210526212238-510955-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯
	

                                                
                                                
** /stderr **
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210526212238-510955-m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210526212238-510955-m03: (1.367372246s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (68.57s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (11.002048553s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.19s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (10.194488933s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.19s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.61s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (9.609912404s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.61s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.47s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
E0526 21:39:13.503976  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 21:39:19.840328  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (8.471762554s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.47s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (14.57s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (14.566069034s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (14.57s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.84s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (13.840614723s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.84s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (14s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (13.998318234s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (14.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.98s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (12.978980893s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.98s)

                                                
                                    
TestPreload (185.84s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210526214017-510955 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210526214017-510955 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0: (2m11.23158771s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210526214017-510955 -- sudo crictl pull busybox
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210526214017-510955 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3
E0526 21:42:50.135037  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210526214017-510955 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3: (52.474622709s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210526214017-510955 -- sudo crictl image ls
helpers_test.go:171: Cleaning up "test-preload-20210526214017-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210526214017-510955
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210526214017-510955: (1.115693631s)
--- PASS: TestPreload (185.84s)

                                                
                                    
TestKubernetesUpgrade (288.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m10.913035289s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210526215256-510955
E0526 21:54:19.839627  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
version_upgrade_test.go:232: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210526215256-510955: (1m32.579685035s)
version_upgrade_test.go:237: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210526215256-510955 status --format={{.Host}}
version_upgrade_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210526215256-510955 status --format={{.Host}}: exit status 7 (80.241663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:239: status error: exit status 7 (may be ok)
version_upgrade_test.go:248: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0526 21:55:53.504495  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:248: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m32.205243058s)
version_upgrade_test.go:253: (dbg) Run:  kubectl --context kubernetes-upgrade-20210526215256-510955 version --output=json
version_upgrade_test.go:272: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:274: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (147.346797ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210526215256-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-alpha.1 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210526215256-510955
	    minikube start -p kubernetes-upgrade-20210526215256-510955 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210526215256-5109552 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-alpha.1, by running:
	    
	    minikube start -p kubernetes-upgrade-20210526215256-510955 --kubernetes-version=v1.22.0-alpha.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:278: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:280: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:280: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210526215256-510955 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (31.113642411s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20210526215256-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210526215256-510955
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210526215256-510955: (1.180247538s)
--- PASS: TestKubernetesUpgrade (288.29s)
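
The K8S_DOWNGRADE_UNSUPPORTED refusal above reduces to a version comparison: the requested Kubernetes version is older than the one the existing cluster was created with, so the start is rejected with guidance instead of touching the cluster. The sketch below illustrates such a guard with a generic semver comparison; it is an assumption-level illustration, not minikube's actual implementation.

// Downgrade-guard sketch (illustration only, not minikube's code): reject a start
// request whose --kubernetes-version is older than the existing cluster's version.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func refuseDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(refuseDowngrade("v1.22.0-alpha.1", "v1.14.0")) // rejected, as in the exit status 106 above
	fmt.Println(refuseDowngrade("v1.14.0", "v1.22.0-alpha.1")) // <nil>: upgrades pass the guard
}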

                                                
                                    
TestPause/serial/Start (202.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210526214750-510955 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210526214750-510955 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (3m22.076075587s)
--- PASS: TestPause/serial/Start (202.08s)

                                                
                                    
TestNetworkPlugins/group/false (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210526215016-510955 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210526215016-510955 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (147.72459ms)

                                                
                                                
-- stdout --
	* [false-20210526215016-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	  - MINIKUBE_LOCATION=11504
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0526 21:50:16.669085  556980 out.go:291] Setting OutFile to fd 1 ...
	I0526 21:50:16.669274  556980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:50:16.669283  556980 out.go:304] Setting ErrFile to fd 2...
	I0526 21:50:16.669286  556980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0526 21:50:16.669371  556980 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/bin
	I0526 21:50:16.669692  556980 out.go:298] Setting JSON to false
	I0526 21:50:16.707719  556980 start.go:110] hostinfo: {"hostname":"debian-jenkins-agent-4","uptime":19979,"bootTime":1622045838,"procs":179,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0526 21:50:16.707793  556980 start.go:120] virtualization: kvm guest
	I0526 21:50:16.710287  556980 out.go:170] * [false-20210526215016-510955] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0526 21:50:16.711913  556980 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/kubeconfig
	I0526 21:50:16.713394  556980 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0526 21:50:16.714836  556980 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube
	I0526 21:50:16.716320  556980 out.go:170]   - MINIKUBE_LOCATION=11504
	I0526 21:50:16.716888  556980 driver.go:331] Setting default libvirt URI to qemu:///system
	I0526 21:50:16.748745  556980 out.go:170] * Using the kvm2 driver based on user configuration
	I0526 21:50:16.748770  556980 start.go:278] selected driver: kvm2
	I0526 21:50:16.748777  556980 start.go:751] validating driver "kvm2" against <nil>
	I0526 21:50:16.748793  556980 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0526 21:50:16.750972  556980 out.go:170] 
	W0526 21:50:16.751074  556980 out.go:235] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0526 21:50:16.752464  556980 out.go:170] 

                                                
                                                
** /stderr **
helpers_test.go:171: Cleaning up "false-20210526215016-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210526215016-510955
--- PASS: TestNetworkPlugins/group/false (0.43s)
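
The exit status 14 above is a flag-validation failure (MK_USAGE) emitted before any VM is created: a non-Docker runtime such as containerd needs a CNI plugin, so --cni=false is rejected outright. A minimal sketch of that check, assuming nothing about minikube's internals:

// CNI validation sketch (assumption, not minikube's code): containerd with CNI
// explicitly disabled is a usage error, mirroring the MK_USAGE exit above.
package main

import "fmt"

func validateCNI(containerRuntime, cni string) error {
	if containerRuntime != "docker" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("containerd", "false"))  // the usage error seen in the test
	fmt.Println(validateCNI("containerd", "bridge")) // <nil>: any concrete CNI choice is accepted
}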

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210526214750-510955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210526214750-510955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (5.157169333s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.17s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210526214750-510955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)
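
"minikube pause" freezes the cluster's running containers (the later VerifyStatus output reports "Paused 7 containers in: kube-system, ..."). The sketch below shows one way to express the same operation directly against containerd's Go client; it is an assumption for illustration, and minikube's own pause path, which drives the runtime inside the guest VM, may differ.

// Pause sketch using the containerd Go client (illustration only).
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// pauseAll freezes every running task in containerd's "k8s.io" namespace,
// which is where CRI-managed (kube-system etc.) containers live.
func pauseAll(ctx context.Context, socket string) error {
	client, err := containerd.New(socket)
	if err != nil {
		return err
	}
	defer client.Close()

	ctx = namespaces.WithNamespace(ctx, "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		return err
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			continue // container has no running task; nothing to pause
		}
		if err := task.Pause(ctx); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := pauseAll(context.Background(), "/run/containerd/containerd.sock"); err != nil {
		fmt.Println("pause failed:", err)
	}
}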

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210526214750-510955 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210526214750-510955 --output=json --layout=cluster: exit status 2 (260.380483ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210526214750-510955","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.20.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210526214750-510955","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
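
The "status --output=json --layout=cluster" payload above nests per-component results (418 Paused, 405 Stopped, 200 OK) under each node. The sketch below decodes that shape; the Go type names are assumptions chosen to mirror the JSON keys in the log, not minikube's own types.

// Decoding sketch for the `status --output=json --layout=cluster` payload above.
// Field names follow the JSON keys in the log; the type names are assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

type componentStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

type nodeStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]componentStatus
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []nodeStatus
}

func main() {
	raw := `{"Name":"pause-20210526214750-510955","StatusCode":418,"StatusName":"Paused",
	"Nodes":[{"Name":"pause-20210526214750-510955","StatusCode":200,"StatusName":"OK",
	"Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},
	"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Prints: Paused 418 kubelet Stopped
	fmt.Println(st.StatusName, st.StatusCode, "kubelet", st.Nodes[0].Components["kubelet"].StatusName)
}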

                                                
                                    
TestPause/serial/Unpause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210526214750-510955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

                                                
                                    
TestPause/serial/PauseAgain (5.65s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210526214750-510955 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20210526214750-510955 --alsologtostderr -v=5: (5.648849902s)
--- PASS: TestPause/serial/PauseAgain (5.65s)

                                                
                                    
TestPause/serial/DeletePaused (1.05s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210526214750-510955 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210526214750-510955 --alsologtostderr -v=5: (1.054685762s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (167.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd: (2m47.218103559s)
--- PASS: TestNetworkPlugins/group/auto/Start (167.22s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210526215016-510955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210526215016-510955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-6q8mv" [d9a38328-c81a-4bdf-af94-53c5e224de10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-6q8mv" [d9a38328-c81a-4bdf-af94-53c5e224de10] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.007944694s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.56s)
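
Every "waiting ... for pods matching <selector>" step in these network-plugin checks is a label-selector poll against the API server until all matched pods reach the Running phase (the Pending / ContainersNotReady lines above are intermediate poll results). A hedged client-go sketch of that pattern follows; it is not the suite's own helpers_test.go implementation.

// Pod-readiness polling sketch (assumption: a recent client-go). Poll every 2s
// until all pods matching the label selector are Running, or the timeout expires.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodsRunning blocks until every pod matching selector in namespace ns is Running.
func WaitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet; keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // e.g. the Pending / ContainersNotReady states seen above
			}
		}
		return true, nil
	})
}

// Example: WaitForPodsRunning(clientset, "default", "app=netcat", 15*time.Minute)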

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210526215016-510955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
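
The DNS, Localhost and HairPin subtests (here and for the other CNIs below) run three probes inside the netcat pod: resolve kubernetes.default, connect to localhost:8080, and connect to the pod's own Service name, which only succeeds when the CNI supports hairpin traffic. The sketch below shows equivalent probes in Go, standing in for the nslookup/nc busybox commands in the log.

// Probe sketch equivalent to the nslookup / `nc -w 5 -z` commands above
// (illustration only; the tests shell out to busybox tools instead).
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// tcpProbe mimics `nc -w 5 -z host 8080`: connect with a 5s timeout, then close.
func tcpProbe(host, port string) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// DNS check: resolve the in-cluster name, as `nslookup kubernetes.default` does.
	addrs, err := net.DefaultResolver.LookupHost(context.Background(), "kubernetes.default")
	fmt.Println(addrs, err)

	fmt.Println(tcpProbe("localhost", "8080")) // Localhost check
	fmt.Println(tcpProbe("netcat", "8080"))    // HairPin check: the pod dials its own Service name
}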

                                                
                                    
TestNetworkPlugins/group/cilium/Start (164.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210526215017-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210526215017-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd: (2m44.532391651s)
--- PASS: TestNetworkPlugins/group/cilium/Start (164.53s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (158.80s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210526215017-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd
E0526 21:57:50.135105  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210526215017-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m38.803322002s)
--- PASS: TestNetworkPlugins/group/calico/Start (158.80s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:335: "cilium-nc7zw" [7705166d-418b-43a1-b3b7-6145657d6004] Running
E0526 21:59:19.839848  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.024379309s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210526215017-510955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (10.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210526215017-510955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-k9rdf" [7f8a14df-19e2-4743-a820-91e517270f70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-k9rdf" [7f8a14df-19e2-4743-a820-91e517270f70] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.011217123s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.55s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210526215017-510955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210526215017-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210526215017-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (172.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210526215017-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210526215017-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd: (2m52.523916669s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (172.52s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:335: "calico-node-sz28n" [1ee2be4f-255e-4ea9-adcc-7277ee808092] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.029046353s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210526215017-510955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210526215017-510955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-7rnnw" [005e07df-114b-4b7f-8f72-94686102fdb9] Pending
helpers_test.go:335: "netcat-66fbc655d5-7rnnw" [005e07df-114b-4b7f-8f72-94686102fdb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-7rnnw" [005e07df-114b-4b7f-8f72-94686102fdb9] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.020443205s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.58s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210526215017-510955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210526215017-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210526215017-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (178.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0526 22:01:21.996636  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:22.001957  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:22.012219  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:22.032516  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:22.072829  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:22.153188  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:22.313608  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:22.633885  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:23.274064  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:24.555114  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:27.116209  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:32.236954  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:01:42.477912  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (2m58.071999291s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (178.07s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (174.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m54.54954038s)
--- PASS: TestNetworkPlugins/group/flannel/Start (174.55s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210526215017-510955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (12.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210526215017-510955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-w4ggf" [3bfa566b-2bda-493e-a45e-4c1b20b90817] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-w4ggf" [3bfa566b-2bda-493e-a45e-4c1b20b90817] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 12.14722481s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (12.88s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (182.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0526 22:02:43.919295  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:02:50.134994  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (3m2.882137059s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (182.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:335: "kindnet-rc2gm" [0a43c7e3-96f9-44e6-82a2-e9da8f109f01] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021488794s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210526215016-510955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210526215016-510955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-thtf4" [26a46581-9eed-4164-9779-2f62304f8db6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-thtf4" [26a46581-9eed-4164-9779-2f62304f8db6] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.014978988s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.54s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210526215016-510955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (163.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0526 22:04:02.888753  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 22:04:05.840092  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:04:17.450689  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:17.456718  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:17.466971  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:17.487205  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:17.528276  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:17.609277  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:17.769903  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:18.090584  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:18.731481  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:19.840240  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 22:04:20.012047  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:22.572917  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:27.693887  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:37.934798  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:04:58.415206  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210526215016-510955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m43.874035203s)
--- PASS: TestNetworkPlugins/group/bridge/Start (163.87s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:335: "kube-flannel-ds-amd64-fcccl" [04d3266f-c506-44c4-ad3e-5e94ae44ad55] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.021476999s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20210526215016-510955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context flannel-20210526215016-510955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-zc2m2" [21e9c04c-13e2-4998-8b71-fc098bada28f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-zc2m2" [21e9c04c-13e2-4998-8b71-fc098bada28f] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.010419192s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.58s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:162: (dbg) Run:  kubectl --context flannel-20210526215016-510955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:181: (dbg) Run:  kubectl --context flannel-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:231: (dbg) Run:  kubectl --context flannel-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (152.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210526220515-510955 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210526220515-510955 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m32.220387299s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (181.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210526220518-510955 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
E0526 22:05:23.871701  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:23.877246  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:23.887564  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:23.907807  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:23.948562  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:24.028909  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:24.190052  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:24.510879  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:25.151520  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:26.432322  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:28.993262  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:34.114123  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:39.375601  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:05:44.354711  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210526220518-510955 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (3m1.186872633s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (181.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210526215016-510955 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210526215016-510955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-xw2d6" [a2df4cfe-a1ea-442c-8a09-7ccd8eb9b7e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-xw2d6" [a2df4cfe-a1ea-442c-8a09-7ccd8eb9b7e2] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 18.01094884s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.66s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210526215016-510955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0526 22:06:04.835149  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (188.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210526220606-510955 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2
E0526 22:06:21.996492  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210526220606-510955 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2: (3m8.32214129s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (188.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (1.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210526215016-510955 "pgrep -a kubelet"
net_test.go:119: (dbg) Done: out/minikube-linux-amd64 ssh -p bridge-20210526215016-510955 "pgrep -a kubelet": (1.622926088s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.62s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210526215016-510955 replace --force -f testdata/netcat-deployment.yaml
E0526 22:06:45.796083  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
net_test.go:131: (dbg) Done: kubectl --context bridge-20210526215016-510955 replace --force -f testdata/netcat-deployment.yaml: (1.123043143s)
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-wtzbq" [6d4e5555-c2c4-48c0-a0be-1d729ccdd78c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0526 22:06:49.680649  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
helpers_test.go:335: "netcat-66fbc655d5-wtzbq" [6d4e5555-c2c4-48c0-a0be-1d729ccdd78c] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011850401s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210526215016-510955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210526215016-510955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
E0526 22:17:13.578474  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (148.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210526220657-510955 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2
E0526 22:07:01.296817  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:28.758054  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:28.763527  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:28.774015  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:28.794327  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:28.834643  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:28.915697  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:29.076334  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:29.396952  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:30.037683  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:34.405671  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:36.966165  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:07:42.086863  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210526220657-510955 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2: (2m28.62704014s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (148.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210526220515-510955 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [ce2ab27b-be6e-11eb-b5dc-525400723a4b] Pending
helpers_test.go:335: "busybox" [ce2ab27b-be6e-11eb-b5dc-525400723a4b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [ce2ab27b-be6e-11eb-b5dc-525400723a4b] Running
E0526 22:07:50.134728  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 22:07:52.327119  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.029043433s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210526220515-510955 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210526220515-510955 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210526220515-510955 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210526220515-510955 --alsologtostderr -v=3
E0526 22:08:07.717189  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:08:12.807843  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210526220515-510955 --alsologtostderr -v=3: (1m32.505269097s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210526220518-510955 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [f9d3ce32-74d9-4884-b767-c2a35e551c86] Pending
helpers_test.go:335: "busybox" [f9d3ce32-74d9-4884-b767-c2a35e551c86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [f9d3ce32-74d9-4884-b767-c2a35e551c86] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.026478952s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210526220518-510955 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210526220518-510955 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210526220518-510955 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (93.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210526220518-510955 --alsologtostderr -v=3
E0526 22:08:42.590897  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:42.596270  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:42.606613  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:42.626959  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:42.667277  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:42.747846  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:42.908462  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:43.229137  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:43.869713  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:45.150142  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:47.711156  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:52.832252  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:08:53.768179  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:09:03.072624  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210526220518-510955 --alsologtostderr -v=3: (1m33.511113082s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (93.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210526220606-510955 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [f5cfe6f9-917b-4e68-8a59-835fceabdffa] Pending
helpers_test.go:335: "busybox" [f5cfe6f9-917b-4e68-8a59-835fceabdffa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0526 22:09:17.450827  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
helpers_test.go:335: "busybox" [f5cfe6f9-917b-4e68-8a59-835fceabdffa] Running
E0526 22:09:19.840413  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.047841222s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210526220606-510955 exec busybox -- /bin/sh -c "ulimit -n"
E0526 22:09:23.553216  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210526220606-510955 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210526220606-510955 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210526220606-510955 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210526220606-510955 --alsologtostderr -v=3: (1m32.507041463s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.51s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210526220657-510955 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [5a7473f0-a9a0-426f-a9f0-08f426f2dd09] Pending
helpers_test.go:335: "busybox" [5a7473f0-a9a0-426f-a9f0-08f426f2dd09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [5a7473f0-a9a0-426f-a9f0-08f426f2dd09] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 7.026014497s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210526220657-510955 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955: exit status 7 (74.902438ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210526220515-510955 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (441.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210526220515-510955 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210526220515-510955 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (7m21.230671877s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (441.52s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210526220657-510955 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210526220657-510955 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (92.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210526220657-510955 --alsologtostderr -v=3
E0526 22:09:45.137162  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:09:58.607184  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:58.612455  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:58.622723  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:58.643028  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:58.683274  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:58.763779  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:58.924784  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:59.245329  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:09:59.886062  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:02.823734  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210526220657-510955 --alsologtostderr -v=3: (1m32.511121587s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (92.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955: exit status 7 (71.030705ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210526220518-510955 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (327.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210526220518-510955 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
E0526 22:10:04.513510  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:05.384822  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:10.505661  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:15.689055  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:10:20.746448  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:23.871723  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:10:41.227583  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:45.868006  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:45.873374  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:45.883618  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:45.904096  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:45.944501  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:46.025213  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:46.186161  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:46.507051  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:47.147970  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:48.428928  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:50.989738  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:10:51.558128  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:10:56.110895  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210526220518-510955 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (5m27.603723774s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (327.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955: exit status 7 (77.463556ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210526220606-510955 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (455.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210526220606-510955 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2
E0526 22:11:06.351285  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210526220606-510955 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2: (7m34.745716968s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (455.01s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955: exit status 7 (82.197138ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210526220657-510955 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (529.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210526220657-510955 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2
E0526 22:11:21.997079  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:22.188682  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:26.433946  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:26.831777  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:45.892735  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:45.898047  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:45.908276  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:45.928538  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:45.968893  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:46.049489  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:46.210436  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:46.531493  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:47.172419  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:48.452658  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:51.013632  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:11:56.134797  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:12:06.375719  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:12:09.643063  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:12:26.855917  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:12:28.757292  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:12:33.505467  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 22:12:44.108912  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:12:50.135280  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 22:12:59.529258  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:13:07.816397  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:13:31.563661  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:13:42.590956  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:14:12.521623  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
E0526 22:14:17.450949  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:14:19.840398  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 22:14:29.737570  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory
E0526 22:14:58.607156  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
E0526 22:15:23.871113  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/calico-20210526215017-510955/client.crt: no such file or directory
E0526 22:15:27.949740  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210526220657-510955 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.2: (8m49.29202001s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (529.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-qnf72" [e4503b41-d14c-4fcc-af15-6bf21447121b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-qnf72" [e4503b41-d14c-4fcc-af15-6bf21447121b] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.023887021s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-qnf72" [e4503b41-d14c-4fcc-af15-6bf21447121b] Running
E0526 22:15:45.867122  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008995414s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210526220518-510955 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210526220518-510955 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: library/minikube-local-cache-test:functional-20210526211257-510955
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210526220518-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955: exit status 2 (254.3687ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955: exit status 2 (255.789354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20210526220518-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210526220518-510955 -n no-preload-20210526220518-510955
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (81.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210526221553-510955 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
E0526 22:16:15.404195  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
E0526 22:16:21.996915  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:16:45.892297  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/bridge-20210526215016-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210526221553-510955 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m21.034686599s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (81.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-mds5d" [e7f1e9d6-be6f-11eb-b77d-525400723a4b] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02088038s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-mds5d" [e7f1e9d6-be6f-11eb-b77d-525400723a4b] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008077885s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210526220515-510955 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210526220515-510955 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: library/minikube-local-cache-test:functional-20210526211257-510955
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210526220515-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955: exit status 2 (260.903184ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955: exit status 2 (249.12276ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210526220515-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210526220515-510955 -n old-k8s-version-20210526220515-510955
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210526221553-510955 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (92.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210526221553-510955 --alsologtostderr -v=3
E0526 22:17:28.757922  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/custom-weave-20210526215017-510955/client.crt: no such file or directory
E0526 22:17:45.041088  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/auto-20210526215016-510955/client.crt: no such file or directory
E0526 22:17:47.979286  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:47.984678  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:47.994918  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:48.015192  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:48.055407  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:48.136330  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:48.296790  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:48.616925  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:49.257331  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:50.134659  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/functional-20210526211257-510955/client.crt: no such file or directory
E0526 22:17:50.538435  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:53.099143  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:17:58.219807  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:18:08.460142  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:18:20.060995  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:20.066397  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:20.076754  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:20.097187  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:20.137802  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:20.218265  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:20.379356  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:20.700197  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:21.341191  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:22.621786  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:25.182636  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:18:28.940502  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:18:30.303284  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210526221553-510955 --alsologtostderr -v=3: (1m32.51705823s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (92.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-pq22n" [b409ea7e-3beb-4794-87bb-c625e02ed46b] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015543323s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-pq22n" [b409ea7e-3beb-4794-87bb-c625e02ed46b] Running
E0526 22:18:40.544549  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007874935s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210526220606-510955 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210526220606-510955 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: library/minikube-local-cache-test:functional-20210526211257-510955
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210526220606-510955 --alsologtostderr -v=1
E0526 22:18:42.591157  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/kindnet-20210526215016-510955/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955: exit status 2 (242.093257ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955: exit status 2 (242.864066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20210526220606-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210526220606-510955 -n embed-certs-20210526220606-510955
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955: exit status 7 (73.310185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210526221553-510955 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (117.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210526221553-510955 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
E0526 22:19:01.025387  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory
E0526 22:19:09.901420  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/old-k8s-version-20210526220515-510955/client.crt: no such file or directory
E0526 22:19:17.451089  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/cilium-20210526215017-510955/client.crt: no such file or directory
E0526 22:19:19.840270  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/addons-20210526204012-510955/client.crt: no such file or directory
E0526 22:19:41.986495  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/no-preload-20210526220518-510955/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210526221553-510955 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m56.998466288s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (117.26s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-vczb7" [04a1d30c-8501-424d-abfd-486e538525eb] Running
E0526 22:19:58.606965  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/flannel-20210526215016-510955/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014150288s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-vczb7" [04a1d30c-8501-424d-abfd-486e538525eb] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007154811s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210526220657-510955 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.24s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210526220657-510955 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: library/minikube-local-cache-test:functional-20210526211257-510955
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210526220657-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955: exit status 2 (247.096252ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955: exit status 2 (250.931122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210526220657-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210526220657-510955 -n default-k8s-different-port-20210526220657-510955
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210526221553-510955 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/minikube-local-cache-test:functional-20210526211257-510955
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210526221553-510955 --alsologtostderr -v=1
E0526 22:20:45.867990  510955 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-11504-506724-773500bc74bd75ddc5ffa547d8fa571191ff1ba1/.minikube/profiles/enable-default-cni-20210526215016-510955/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955: exit status 2 (234.483628ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955: exit status 2 (234.876803ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210526221553-510955 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210526221553-510955 -n newest-cni-20210526221553-510955
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.05s)

                                                
                                    

Test skip (25/260)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.2/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-alpha.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-alpha.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-alpha.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-alpha.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-alpha.1/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-alpha.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:207: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:116: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:189: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:411: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:471: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:289: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:171: Cleaning up "kubenet-20210526215016-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210526215016-510955
--- SKIP: TestNetworkPlugins/group/kubenet (0.27s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:171: Cleaning up "disable-driver-mounts-20210526220657-510955" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210526220657-510955
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                    